Fine-tuning seems to be out of fashion (if it was ever really in fashion), but I still see folks like Karpathy mention reaching for it as a tool.
But is anyone here doing that in a business capacity? Are you fine-tuning a remote LLM or something self-hosted? What for?
I’m just curious where the line is for “oh, this is better encoded in the model’s weights” versus handled by RAG or reasoning over context it has to figure out at runtime.