HACKER Q&A
📣 FezzikTheGiant

Running LLMs Locally


What's the best way to build an app around local LLMs? How can I educate myself on this?


  👤 ttyprintk Accepted Answer ✓
The “around” part of your question hints that you may want to see the range of options LangChain makes possible. Instead, for brainstorming, I’d recommend the description of what SymbolicAI aims to support:

https://github.com/ExtensityAI/symbolicai?tab=readme-ov-file...

I think what you’ll find is that some applications are very capable locally, like Whisper.
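
As a concrete illustration, local transcription with the openai-whisper package is only a few lines. The model size and file path below are placeholders, and ffmpeg needs to be on your PATH:

    # pip install openai-whisper
    import whisper

    # "base" is one of the smaller checkpoints; "small"/"medium"/"large"
    # trade speed for accuracy. "meeting.mp3" is a placeholder path.
    model = whisper.load_model("base")
    result = model.transcribe("meeting.mp3")
    print(result["text"])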

A lot of plugins expect to work with the llama.cpp family of servers. Nowadays, HuggingFace TGI exposes the same OpenAI-compatible Messages API: https://huggingface.co/blog/tgi-messages-api

So your application could speak the OpenAI API, and you’d run HuggingFace TGI on your own hardware for testing and comparison.
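
For example, here’s a minimal sketch of that pattern, assuming a TGI (or llama.cpp) server is already listening locally; the base_url, port, and prompt are assumptions you’d adjust to your setup:

    # pip install openai
    from openai import OpenAI

    # Point the standard OpenAI client at a local OpenAI-compatible
    # server instead of api.openai.com.
    client = OpenAI(
        base_url="http://localhost:8080/v1",
        api_key="not-needed",  # local servers typically ignore the key
    )

    response = client.chat.completions.create(
        model="tgi",  # TGI serves whatever model it was launched with
        messages=[{"role": "user", "content": "Summarize llama.cpp in one sentence."}],
    )
    print(response.choices[0].message.content)

Since only base_url changes, the same application code can hit either the hosted OpenAI API or your local server, which is what makes side-by-side testing and comparison cheap.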