HACKER Q&A
📣 nate

Are You Using Finetuning?


How? For what?

Finetuning seems to be out of fashion (if it was ever really in fashion), but I still see folks like Karpathy mention reaching for it as a tool.

But is anyone in any business capacity on here doing that? Are you finetuning any remote LLM or something self-hosted? What for?

I’m just curious where the line is of “oh, this is better encoded in the model’s weights” versus RAG/reasoning over context it needs to figure out.


  👤 BoredPositron Accepted Answer ✓
We mainly do full finetunes on diffusion models and their text encoders (e.g. z-image, flux2 klein) to adapt them to our clients' visual styles, and we train LoRAs for people and products. Quality goes up immensely once the model has a better grasp of professional visual terms. Training the right kind of leather or plastic (mainly for the pattern) helps when you're scaling to 12-16k and want 99.9% reproduction: everything becomes a texture at that size, and if you haven't trained those textures, it's a mess.
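For readers unfamiliar with the LoRA technique the answer mentions: instead of updating a frozen pretrained weight matrix, you train a small low-rank delta alongside it. The sketch below is purely illustrative (plain numpy, hypothetical sizes, not the poster's actual pipeline or any diffusion framework) and just shows the core math: the adapted layer computes the base output plus a scaled low-rank correction, and with the up-projection initialized to zero the adapter starts as a no-op.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only
d_out, d_in, rank, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha / rank) * B A x  --  base output plus low-rank delta
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Because B starts at zero, the adapted layer initially matches the base model
assert np.allclose(lora_forward(x), W @ x)
```

In a real diffusion finetune these A/B pairs are attached to the attention layers and only they are trained, which is why a style or product LoRA is a few megabytes instead of a full checkpoint.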