HACKER Q&A
📣 etewiah

Does anyone have experience running LLMs on a Mac Mini M2?


I would like to run an LLM locally so I can be absolutely sure that the data I send to it is private. Does anyone have experience doing this with the latest Mac Mini? Any insights will be very much appreciated. Thanks.


  👤 mtmail Accepted Answer ✓
I ran the 13GB llamafile described at https://simonwillison.net/2023/Nov/29/llamafile/ on an M3 without issues, getting about 40 tokens per second of output.
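For anyone who hasn't tried llamafile: each model ships as a single self-contained executable, so running one is just two commands. The filename below is a placeholder; substitute whichever llamafile you download.

```shell
# model.llamafile is a stand-in name for the file you actually download
chmod +x model.llamafile   # llamafiles are self-contained executables
./model.llamafile          # serves a local chat UI, by default at http://127.0.0.1:8080
```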

👤 paeselhz
I bought a Mac Mini M2 last year to start playing around with some personal projects. I did some tests using LM Studio running Mixtral models with pretty good throughput, and I also tested OpenAI's Whisper models for transcription; those ran fine as well.

I do, however, recommend that you upgrade the RAM: 8GB is barely enough, so get at least 16GB. (I don't recommend upgrading the SSD, though. Thanks to Thunderbolt 4 you can get a fast external SSD for half the price Apple charges for internal storage.)
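A rough rule of thumb (an approximation, not a benchmark) for sizing RAM: a quantized model needs about params × bits-per-weight / 8 bytes in memory, plus working overhead for the KV cache and the OS.

```python
def estimate_model_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate in-memory size of a quantized model, in GB."""
    return params_billion * bits_per_weight / 8

# A 7B model at 4-bit quantization: ~3.5 GB, comfortable on a 16GB machine.
print(estimate_model_gb(7, 4))    # 3.5
# Mixtral 8x7B has roughly 47B total parameters, so ~23.5 GB at 4-bit,
# which is why the larger models want 32GB or more.
print(estimate_model_gb(47, 4))   # 23.5
```

This is why the 16GB recommendation matters: an 8GB machine leaves very little headroom once the OS takes its share.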


👤 geoah
Download LM Studio and you're done. Depending on how much RAM you have, you can run different models; check out the Mixtral 8x7B ones for generally good results. https://lmstudio.ai/
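One nice LM Studio feature worth knowing about: it can expose an OpenAI-compatible local server (default port 1234), so scripts can talk to the local model the same way they would talk to the OpenAI API. A minimal sketch, assuming you've started the server from the app's local-server tab with a model loaded:

```shell
# Query the locally running model; nothing leaves your machine.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7
  }'
```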