You can use it for all kinds of roadmaps and features that have been done before, but the same has always been true for "frameworks".
I think the next iteration will be an AI model connected to an actual relational database (yes, a classic one) to store factual information and to "understand" things by relations. Technically it's always been possible to use world-scale databases with unfathomable amounts of information spread across continents, and we as human agents are also simply "linking" relational info to each other. AI will be much better at automatically creating atomic database tables and queries because it understands them at another level than we do. This could also be combined with "trust scoring": where the information comes from, how often you heard it, from which source, each with its own trust score, and so on.
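To make the trust-scoring idea concrete, here is a minimal sketch in Python, assuming a made-up schema: facts as (subject, relation, object) triples, each sighting of a fact recorded with a per-source trust value, and a naive way of combining them. The table names, columns, and formula are all illustrative, not any existing system.

```python
import sqlite3

# Hypothetical schema: every fact is a (subject, relation, object) triple,
# and every sighting of that fact records the source and how much we trust it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facts (
    id       INTEGER PRIMARY KEY,
    subject  TEXT NOT NULL,
    relation TEXT NOT NULL,
    object   TEXT NOT NULL
);
CREATE TABLE sightings (
    fact_id      INTEGER REFERENCES facts(id),
    source       TEXT NOT NULL,
    source_trust REAL NOT NULL   -- 0.0 .. 1.0, trust in this particular source
);
""")

conn.execute("INSERT INTO facts VALUES (1, 'Paris', 'is_capital_of', 'France')")
conn.executemany(
    "INSERT INTO sightings VALUES (?, ?, ?)",
    [(1, 'encyclopedia', 0.9), (1, 'random_blog', 0.3)],
)

rows = conn.execute(
    "SELECT f.subject, f.relation, f.object, s.source_trust "
    "FROM facts f JOIN sightings s ON s.fact_id = f.id WHERE f.id = 1"
).fetchall()

# One possible combination rule: treat the fact as trusted if at least one
# source is right, i.e. 1 - product of (1 - trust) over all sightings.
combined = 1.0
for *_, trust in rows:
    combined *= (1.0 - trust)
print(rows[0][:3], "trust:", round(1.0 - combined, 3))
```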
This is why I am excited about the OpenAI/Microsoft partnership, because it will mean we get to see OUR data incorporated into high-end models. That, for me, is when the beta of ChatGPT really begins.
And what this has to do with coding:
You can replicate amazing software and create novel software too, but right now it's all a good-luck game where you might just find out that half the functions you were supposed to import do not exist. My belief is that this is because it does not actually understand code: it translates it into a human-readable format, writes a story for you based on probability, and then translates that back.
If it knew how a Kubernetes Service and Deployment actually relate to each other, and which properties of one influence the other, that's when this is going to reach amazing levels =)
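The relation being pointed at is small but exact: the Service's selector has to match the labels on the Deployment's pod template, and the Service's targetPort has to line up with the container's port. A tiny sketch below, with the manifests reduced to Python dicts rather than real YAML; the field names mirror the Kubernetes API, everything else is illustrative.

```python
# Two stripped-down manifests, written as Python dicts instead of YAML.
# The point: Service.spec.selector must be a subset of the labels on the
# Deployment's pod template, or the Service routes traffic to nothing.
deployment = {
    "kind": "Deployment",
    "spec": {
        "template": {
            "metadata": {"labels": {"app": "web", "tier": "frontend"}},
            "spec": {"containers": [{"name": "web",
                                     "ports": [{"containerPort": 8080}]}]},
        }
    },
}

service = {
    "kind": "Service",
    "spec": {
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080}],  # targetPort must match containerPort
    },
}

def service_selects_deployment(svc: dict, dep: dict) -> bool:
    """True if every selector key/value appears among the pod template's labels."""
    selector = svc["spec"]["selector"]
    pod_labels = dep["spec"]["template"]["metadata"]["labels"]
    return all(pod_labels.get(k) == v for k, v in selector.items())

print(service_selects_deployment(service, deployment))  # True
```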
That means humans need to be in the loop, and programming languages are the best abstractions we’ve yet created for unambiguously translating business requirements into executable logic. Seems to me like that remains every bit as necessary.
I'd argue that it is likely significantly harder for current models due to the sheer number of instructions involved in machine code, which is an issue for LLMs given their limited context window (span in tokens).
What we are seeing with ChatGPT is that the model is actually inventing its own abstractions, which, imho, suggests that going up in abstraction instead of down will enable higher productivity for the models.
On one hand, LLMs are very versatile in what they can produce, but otoh that versatility results in delusions. This is, imho, akin to AlphaGo when it made that single error in the match with Lee Sedol.
We will have to correct it for years while leveraging its speed for shell scripts, text files, and customized boilerplate, until it stops hallucinating.
Pretty sure we tried this, a few times now, got scared at the results, and unplugged the power source.
Human in the loop. For humanity's sake. Please.
Nope, we didn't, and therein lies the problem.
What we did do is make it appear as if it understands human language, but there are numerous examples across the web showing that it fundamentally does not have any understanding of what it's saying.
I am not an ML/AI engineer but this is most likely much easier said than done and probably more into "true AI" territory. (?)