I'm now interested enough to try running one myself and see how it suits my personal workflow. So I have a few questions:
1) How can I set up an LLM locally with a good effort/reward ratio? I don't want to spend hours setting up something unreliable that needs constant tweaking; ideally something I can just interact with easily from a web UI/CLI when I need to.
2) Is there an easy way to keep up to date with LLMs, so I can switch to newer models as they become popular and get the best results?
Note that I'm only looking for self hosted, Linux compatible solutions!
Though for "personal workflow", unless you want to play with the internals of the models or are worried about privacy, I'd just use ChatGPT. (In fact I do: despite having llama.cpp set up to run various models, I always use ChatGPT for personal stuff and programming questions.)
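For 1), if you do go the self-hosted route, llama.cpp is about as low-effort as it gets on Linux. A rough sketch (the model path is illustrative, and binary names have shifted between llama.cpp versions, so check the repo's README):

```shell
# Build llama.cpp from source (needs git, cmake, and a C++ compiler)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Chat from the CLI (any GGUF model file works, e.g. from Hugging Face)
./build/bin/llama-cli -m ~/models/some-model.gguf

# Or run a local server with a built-in web UI at http://localhost:8080
./build/bin/llama-server -m ~/models/some-model.gguf --port 8080
```

The server route covers your "web UI/CLI when I need to" case: once built, it's a single command to start and there's nothing to keep running in the background when you don't need it.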
2) /r/LocalLLaMA is good, and then also the "Open LLM Leaderboard" and the "LMSYS Chatbot Arena leaderboard".