My question is: what is a programming language for at this point? As a human programmer, I feel like I need a way to verify what the machine has written, and programming languages are, by their very nature, the human/machine interface. But they really only benefit the human; in theory, this AI could just flip the bits and write machine code directly. How would we verify the resulting software, though? Automated tests? What if we miss something? Will future AI-written software be a black box, or will languages still have their place? If so, will they be a different kind of language from the ones we have now? Will a code-writing AI in 2070 write assembly but convert it to C for our benefit? That seems unlikely.
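One partial answer to "automated tests? what if we miss something?" is randomized property checking: instead of verifying a black-box routine against a few hand-picked examples, you assert invariants that must hold for any input. A minimal sketch in Python, where `ai_sort` is a hypothetical stand-in for machine-written code:

```python
import random
from collections import Counter

def ai_sort(xs):
    """Hypothetical stand-in for an AI-written routine under test."""
    return sorted(xs)

# Check properties any correct sort must satisfy, on many random inputs,
# rather than trusting a handful of hand-written cases.
for _ in range(1000):
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    out = ai_sort(xs)
    assert all(a <= b for a, b in zip(out, out[1:]))  # output is ordered
    assert Counter(out) == Counter(xs)                # same elements, same counts
```

This doesn't prove correctness (a property you forgot to state is still a gap), but it scales better than reading the generated machine code yourself.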
So when AI can tease out my clients' requirements when they have no clue themselves, or, even worse, are adamant they know what they need but are clueless, then I'll be concerned.
Furthermore, try getting regulatory approval in, say, the banking or machinery-safety sector for a system when you can't explain in detail what it will do in any given circumstance.
This explainability problem already exists: one of our local banks gave a talk at an AI meetup and said they had gotten results with AI/ML that they couldn't roll out, because they could not adequately explain to the regulator exactly what results it would give in certain circumstances.
I think the wrong question is being asked. Instead of asking "how will a programming language apply," the question should be "how many layers of abstraction are tolerated?"
How much abstraction to tolerate is a cost-benefit question. In one scenario, it may be more attractive to forgo tooling at a more abstract layer in favor of the undergirding tooling at a less abstract layer: for example, a software dev two or more layers deep in plugins/frameworks/libraries who realizes that working with the stdlib would actually take less time. Or, in your scenario, a company using an AI system to write high-level scripts realizes that working with those scripts directly would actually take less time.
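As a toy illustration of "working with the stdlib would actually take less time" (Python; the framework call in the comment is purely hypothetical, not a real API):

```python
# Two-layers-deep framework style might look something like
#   value = request.query.get_first("page", cast=int)   # hypothetical
# but when all you actually have is a raw query string, the stdlib
# does the job in one call:
from urllib.parse import parse_qs

params = parse_qs("page=2&sort=name")
page = int(params["page"][0])
```

The abstraction pays for itself only when you need what it adds (validation, defaults, content negotiation); otherwise the less abstract layer wins the cost-benefit comparison.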
In another scenario, however, a software dev may realize that abstraction tooling actually saves time versus working "close to the metal." Or, in your scenario, a company may realize that an AI system actually saves time versus having a human write out boilerplate.
> Will a code-writing AI in 2070 be writing assembly...

If we're looking into crystal balls for the answer, mine says ~50 years is a long enough time for drastic changes that don't fit into this conversation at all. But ignoring all that for a minute, the cost-benefit analysis suggests to me that there is plenty of room for bounces and whiplashes: a period of lots of AI abstraction, followed by a period of abstraction consolidation/standardization and death.
https://stackoverflow.com/help/gpt-policy
Every step in the evolution of programming has come with some degree of controversy, and I think this is normal. We can read in history books (and maybe hear first-hand from a few senior folks on HN) how there was concern about moving from assembly to the first high-level languages. There will probably come a day when AI-written software is sufficient for common use, though it might take a while.