brew install ollama
brew services start ollama
ollama pull mistral
You can query Ollama via HTTP. It provides a consistent interface for prompting, regardless of model.
https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
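For example, a minimal query from Python (a sketch assuming the default localhost:11434 port and the mistral model pulled above; the prompt is just a placeholder):

import requests

# Ollama listens on localhost:11434 by default; /api/generate returns the completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])

With "stream": False you get one JSON object back instead of a stream of chunks, which keeps the example short.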
We also run a Mac Studio (M2 Ultra, 192 GB RAM) with a bigger model (70B) as a chat server. It's pretty fast. Here we use Open WebUI as the interface.
Software-wise, Ollama is OK, as most IDE plugins can work with it now. I personally don't like the Go code they have. Also, some key features I'd need are missing, and those just never get done, even though multiple people have submitted PRs for some of them.
LM Studio is better overall, both as a server and as a chat interface.
I can also recommend CodeGPT plugin for JetBrains products and Continue plugin for VSCode.
As a chat server UI, as I mentioned, Open WebUI works great; I use it with Together AI as a backend too.
It's pretty 'idiot proof', if you ask me.
What do you do with one of these?
Does it generate images? Write code? Can you ask it generic questions?
Do you have to 'train' it?
Do you need a large amount of storage to hold the data to train the model on?
https://python.langchain.com/docs/get_started/introduction
I like LangChain, but it can get complex for use cases beyond a simple "give the LLM a string, get a string back". I've found myself spending more time in the LangChain docs than working on my actual idea/problem. However, it's still a very good framework, and they've done an amazing job IMO.
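For that simple string-in, string-out case, a minimal sketch (assuming a local Ollama backend and the langchain_community package; the import path has moved around between releases) is about this short:

from langchain_community.llms import Ollama

# Point LangChain at the local Ollama server; "mistral" is whatever model you pulled.
llm = Ollama(model="mistral")
print(llm.invoke("Summarize what a context window is in one sentence."))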
edit: "Are options ‘idiot proof’ yet?" - from my limited experience, Ollama is about as straightforward as it gets.
I've got an Ollama instance running on a VPS, providing the backend for a Discord bot.
Edit: using a P40, with Whisper for ASR.
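The wiring is roughly this (not the bot's actual code, just a hypothetical discord.py sketch; the token and model name are placeholders, and the blocking HTTP call is kept for brevity):

import discord
import requests

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    # Forward the message to the Ollama backend and reply with the completion.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": message.content, "stream": False},
    )
    await message.channel.send(r.json()["response"][:2000])  # Discord caps messages at 2000 chars

client.run("YOUR_BOT_TOKEN")  # placeholder token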
The thing to watch out for (if you have disposable income) is the new RTX 5090. Rumors are floating that it will have 48 GB of RAM per card; even if not, the RAM bandwidth is going to be a lot faster. People doing ML on 4090s or 3090s will move to those, so you can pick up a second 3090 for cheap, at which point you can load higher-parameter models. You will, however, have to learn the Hugging Face Accelerate library to support multi-GPU inference (not hard, just some reading and trial/error).
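If you go that route, the easy entry point is letting Accelerate shard the model for you via device_map="auto" in transformers. Roughly (the checkpoint name is just an example, and you'd still want quantization via bitsandbytes to actually fit a 70B on two 3090s):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"  # example checkpoint, swap in whatever you run
tokenizer = AutoTokenizer.from_pretrained(model_id)

# With accelerate installed, device_map="auto" splits layers across both GPUs
# (and spills to CPU RAM if they still don't fit).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Explain tensor parallelism briefly.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))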
Guess it's going to be a variant of Llama or Grok.
Ease? Probably Ollama.
Speed, and you're batching on a GPU? vLLM.
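A minimal batched-generation sketch with vLLM (the checkpoint name is just an example):

from vllm import LLM, SamplingParams

# vLLM batches these prompts and schedules them with continuous batching on the GPU.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # example checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["What is KV caching?", "Explain speculative decoding."], params)
for out in outputs:
    print(out.outputs[0].text)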
GPT4All is decent as well, and it also provides a way to retrieve information from local documents.
Seriously, this is the insane duo that can get you going in moments with ChatGPT-3.5 quality.
For squeezing every bit of performance out of your GPU, check out ONNX or TensorRT. They're not exactly plug-and-play, but they're getting easier to use.
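The rough shape of an ONNX Runtime session, if you go that way (the model path and input names are placeholders and depend entirely on how you exported the model):

import numpy as np
import onnxruntime as ort

# Prefer the GPU provider, fall back to CPU if it isn't available.
sess = ort.InferenceSession("model.onnx",
                            providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

# Input names and shapes come from the export; for an LLM you'd typically also
# feed an attention mask and past key/values.
input_ids = np.array([[1, 2, 3, 4]], dtype=np.int64)
outputs = sess.run(None, {"input_ids": input_ids})
print(outputs[0].shape)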
And yeah, Docker can make life a bit easier by handling most of the setup mess for you. Just pull a container and you're more or less good to go.
It's not quite "idiot-proof" yet, but it's getting there. Just be ready to troubleshoot and tinker a bit.
Source code: https://github.com/leoneversberg/llm-chatbot-rag
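The retrieval step in that kind of RAG setup boils down to something like this (not the repo's exact code, just the general pattern with sentence-transformers and a toy in-memory corpus):

import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["Ollama serves local models over HTTP.", "vLLM focuses on high-throughput batching."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question, k=1):
    # Cosine similarity reduces to a dot product on normalized vectors.
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(doc_vecs @ q)[::-1][:k]
    return [docs[i] for i in top]

# The retrieved chunks get prepended to the prompt that goes to the LLM.
print(retrieve("Which tool batches requests on the GPU?"))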
flox will also install properly accelerated torch/transformers/sentence-transformers/diffusers/etc.: they were kind enough to give me a preview of their soon-to-be-released SDXL environment suite (please don't hold them to my "soon", I just know it looks close to me). So you can do all the modern image stuff pretty much up to whatever is on HuggingFace.
I don't really have the time to be spending on this, but the last piece before I open source it is a halfway decent sketch of a binary replacement/complement for the OpenAI-compatible JSON/HTTP protocol everyone is using now.
I have incomplete bindings to whisper.cpp and llama.cpp for those modalities, and when it's good enough I hope the buf.build people will accept it as a donation to the community-managed ConnectRPC project suite.
We’re really close to a plausible shot at open standards on this before NVIDIA or someone totally locks down the protocol via the RT stuff.
edit: I almost forgot to mention. We have decent support for multi-vendor, mostly in practice courtesy of the excellent ‘gptel’, though both nvim and VSCode are planned for out-of-the-box support too.
The gap is opening up a bit again between the best closed and best open models.
This is speculation, but I strongly believe the current Opus API-accessible build is more than a point release; it's a fundamental capability increase (though it has a weird BPE truncation issue that could just be a beta bug, but could hint at something deeper).
It can produce verbatim artifacts from hundreds of thousands of tokens ago and restart from any branch in the context, takes dramatically longer when it needs to go deep, and claims it's accessing a sophisticated memory hierarchy. Personally, I've never been slack-jawed with amazement at anything in AI except my first night with SD and this thing.