HACKER Q&A
📣 lulzury

Browser-Based LLMs?


Does anyone know if there are any plans for browsers to natively integrate LLMs, LLM APIs, or models like Llama for local use by web applications?

I feel there's a large opportunity here for a more privacy-friendly, on-device solution that doesn't send the user's data to OpenAI.

Is RAM the current main limitation?



👤 throwaway425933
Every big tech company is trying to do this: FB (through WhatsApp), Google (through Chrome/Android), Apple (through Safari/iOS/etc.). As soon as they meet their internal metrics, they will release these to the public.

👤 FrenchDevRemote
"Is RAM the current main limitation?"

(V)RAM + processing power + storage. (I mean, what kind of average user wants to clog half their hard drive with a subpar model that outputs one token a second?)
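
To put rough numbers on that: weight storage alone scales with parameter count times bits per weight. A back-of-envelope sketch (an estimate only; it ignores the KV cache, activations, and runtime overhead):

    // Rough weight-memory estimate for a dense LLM, in gigabytes.
    // Assumption: weights dominate; KV cache and activations add more on top.
    function modelWeightGB(paramCount: number, bitsPerWeight: number): number {
      return (paramCount * bitsPerWeight) / 8 / 1e9;
    }

    console.log(modelWeightGB(7e9, 4));   // ~3.5 GB for a 4-bit-quantized 7B model
    console.log(modelWeightGB(70e9, 16)); // ~140 GB for a 70B model at fp16

So even aggressively quantized small models cost gigabytes of download and storage before a single token is generated.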


👤 Crier1002
check out https://github.com/mlc-ai/web-llm

IMO the main limitations are access to powerful GPUs for running models locally, and the size of some models, which causes UX problems with cold starts.
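
For context, web-llm runs models in the browser on WebGPU and exposes an OpenAI-style chat API. A minimal sketch based on the project's README (exact model IDs and option names vary by version, so treat those details as assumptions; needs a module context for top-level await):

    import { CreateMLCEngine } from "@mlc-ai/web-llm";

    // First use downloads and compiles the model (the cold start mentioned
    // above); the browser caches the weights for later visits.
    const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
      initProgressCallback: (report) => console.log(report.text),
    });

    // OpenAI-style chat completion, running entirely on-device via WebGPU.
    const reply = await engine.chat.completions.create({
      messages: [{ role: "user", content: "Hello from the browser!" }],
    });
    console.log(reply.choices[0].message.content);

Nothing leaves the machine, which also speaks to the privacy angle in the original question.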