HACKER Q&A
📣 vitiral

Help me understand where LLMs for coding are going


I played with one the other day (not at liberty to say which), and while I find the tech impressive, it was also finicky and required a lot of fiddling.

I'm imagining making scripts/pipelines/etc. which try to transform or write code, and I don't see how this can scale. Once LLMs are "good" I could see using them for small one-off tasks, but could you really make a _tool_ that you have to _maintain_ by gluing prompts together? What happens when the LLM gets "upgraded" and your finicky prompt wording now requires _different_ finicky wording?

It feels like depending on a service whose API contract not only isn't upheld, but can never be upheld.
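
To make it concrete, here's roughly the kind of glue I'm imagining. It's only a sketch: the model name, the prompt, and the call_llm stub are all placeholders, not any particular vendor's API.

    import ast

    MODEL = "some-code-model-v1"  # pinned model version (placeholder name)

    PROMPT = (
        "Rewrite the following Python function to use pathlib instead of "
        "os.path. Return only the code.\n\n{code}"
    )

    def call_llm(model: str, prompt: str) -> str:
        """Stand-in for whatever vendor API the tool would wrap."""
        raise NotImplementedError

    def transform(code: str, retries: int = 3) -> str:
        """Ask the model for a rewrite; accept it only if it still parses."""
        for _ in range(retries):
            candidate = call_llm(MODEL, PROMPT.format(code=code))
            try:
                ast.parse(candidate)  # cheap sanity check, not a guarantee
                return candidate
            except SyntaxError:
                continue  # finicky output; retry with the same prompt
        raise RuntimeError("model never returned parseable code")

Every string in there is something I'd have to re-tune whenever the model underneath changes, and that's the maintenance burden I'm worried about.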


  👤 sinuhe69 Accepted Answer ✓
I imagine it’s as hard as keeping the same style, characters, and expressions across a storyboard using image-generative AI. While these models are good for one-off shots, keeping scenes consistent across a storyboard is something else entirely. People then invented ControlNet variants such as depth and pose conditioning, which allow finer-grained control of the image-generation process, but still nothing like working with a human artist. So the more productive approach is still Photoshop/Illustrator plus generative AI to quickly fill in the parts of the image you want. I think it’s the same with code generation. A human still has to control and organize the structure of the program as a whole. Parts of the code can be generated by AI, but one has to run LangChain or other verification processes to make sure the code is usable and consistent with the whole.
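
As a rough sketch of what I mean by a verification step (accept_generated_code is a hypothetical helper, and the pytest command is just an assumption about the project's test runner; swap in whatever checks your codebase uses):

    import pathlib
    import subprocess

    def accept_generated_code(generated: str, target: pathlib.Path) -> bool:
        """Drop AI-generated code into place, but keep it only if the
        existing test suite still passes; otherwise roll back."""
        backup = target.read_text()
        target.write_text(generated)
        result = subprocess.run(["pytest", "-q"], capture_output=True)
        if result.returncode != 0:
            target.write_text(backup)  # the generated code broke something
            return False
        return True

The point is that the human-owned structure (tests, interfaces, architecture) is what makes the AI-generated pieces safe to accept.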

The whole AI field revolves around tweaking and fine-tuning, so I don’t think the finicky part will go away anytime soon. Looking back at the history of automation, expert systems, and AI, specialization has always been the more promising approach, and I believe that’s the way to go for current AI development as well: not replacing humans, but filling niches and automating the long, boring stuff.