Why are AI code editors not continuously working?
I am getting more and more baffled by the incompetence of AI code editor developers. Why not simply have a "janitor" always running in the background, doing exactly the right refactorings, ensuring clean, well-tested code, etc.? It would help so much against the AI slop problem. And it would be so simple to implement: just some intelligent prompting, or even static and dynamic analysis to detect recurring patterns in code and execution paths.
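To make "simple to implement" concrete, here is a minimal sketch of just the detection half, assuming a Python repo and a crude "duplicate function body" heuristic. The function names, the five-minute polling interval, and the heuristic itself are placeholders I made up, not anything an existing editor ships:

```python
import ast
import hashlib
import time
from collections import defaultdict
from pathlib import Path


def body_fingerprint(fn: ast.FunctionDef) -> str:
    """Hash a function body's AST so copy-pasted implementations collide
    regardless of formatting, comments, or the function's name."""
    dumped = "".join(ast.dump(stmt, annotate_fields=False) for stmt in fn.body)
    return hashlib.sha256(dumped.encode()).hexdigest()


def find_duplicate_functions(repo: Path) -> dict[str, list[str]]:
    """Group every function definition under `repo` by fingerprint; any group
    with more than one member is a candidate for refactoring."""
    groups: dict[str, list[str]] = defaultdict(list)
    for path in repo.rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # a real janitor would report unparseable files instead
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                groups[body_fingerprint(node)].append(
                    f"{path}:{node.lineno}:{node.name}"
                )
    return {h: locs for h, locs in groups.items() if len(locs) > 1}


if __name__ == "__main__":
    while True:  # the "always running in the background" part
        for locs in find_duplicate_functions(Path(".")).values():
            print("Possible duplicate implementations:", ", ".join(locs))
        time.sleep(300)  # re-scan every 5 minutes; a real tool could watch the filesystem
```

This only prints `file:line:name` groups; deciding which groups to actually merge, and how, is the part that would still need an LLM or a human.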
It's not the fault of the "code editor developers." The underlying technology, GenAI, is non-deterministic. When you add input that varies widely in quality (prompts, existing code), what you describe is actually quite difficult.
> And it would be so simple to implement: just some intelligent prompting, or even static and dynamic analysis to detect recurring patterns in code and execution paths.
So just do it, then? Or are you just as incompetent yourself?
> doing exactly the right refactorings
Are you aware of any LLMs that can do that?