Are small local LLMs good at coding?
I deal with the professional LLMs, of course, but I'm really intrigued by the possibility of coding locally, offline. I've got a MacBook Air M4 with 16 GB. Does it have any chance at all of handling coding? NOTE: I'm not too worried about the context window because the way I work is very targeted and surgical. I'll have it look at one file and do something very specific to that file.
They can be. I've done some drift detection work against local models and, for the most part, they do OK. I think there's always room for an augmented approach where local models handle programmatic parsing and structure and the large models handle the actual coding routines. Where possible I try to use witness coding with local + API side by side, to see whether one side catches capabilities that the other misses.
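Rough sketch of what I mean by running the two side by side, in Python. It assumes Ollama is serving a local model on its default port; the model name is just an example of something that fits in 16 GB, and the hosted-API call is left as a stub so you can plug in whichever client you already use.

```python
# Witness-coding sketch: send the same prompt to a local model (via Ollama's
# REST API) and a hosted model, then print a diff so gaps are easy to spot.
import difflib
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
LOCAL_MODEL = "qwen2.5-coder:7b"  # example pick for a 16 GB machine

def ask_local(prompt: str) -> str:
    """Run the prompt through the local model via Ollama."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": LOCAL_MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def ask_hosted(prompt: str) -> str:
    """Stub for the hosted model -- wire this to whatever API you already use."""
    raise NotImplementedError("plug in your hosted-API client here")

def witness(prompt: str) -> None:
    """Show a unified diff of the two answers to surface capability gaps."""
    local_out = ask_local(prompt)
    hosted_out = ask_hosted(prompt)
    diff = difflib.unified_diff(
        local_out.splitlines(), hosted_out.splitlines(),
        fromfile="local", tofile="hosted", lineterm="",
    )
    print("\n".join(diff))

if __name__ == "__main__":
    witness("Refactor this function so the nested loops go away: ...")
```

Nothing fancy, but for surgical one-file edits like yours it makes it obvious pretty quickly which tasks the local model can own and which ones still need the big model.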