The reason is that it will allow companies to increase their output. Think of it like CISC vs RISC processors: on one hand we have experienced developers creating really good software, while on the other hand we have inexperienced developers using AI to create mostly shitty code.
The former is what you want, but it is expensive and really hard to find. The latter is cheap and easy to find. And that is why I think a lot of companies will come to the conclusion that hiring a lot of inexperienced developers makes financial sense.
Of course this is not sustainable in the long run, but I think people will need 5-10 years to really understand that. And when this finally happens we will probably see a new "Agile manifesto" to help guide us in this new technology landscape.
And probably the worst part is that until we mature our understanding of good practices for using AI to write code, we will create some of the worst code we have ever seen. Picture this: fast inverse square root[0], but with bad performance, misleading comments and some subtle bugs that are really hard to debug. And don't forget about the tests. We will have LOADS of tests, but most of them will be meaningless, and they will make it almost impossible to do any meaningful refactor without throwing tens of thousands of lines of code away.
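For anyone who hasn't run into it, the snippet behind that reference looks roughly like this (a from-memory sketch, with memcpy standing in for the original pointer casts so it stays well-defined C):

    #include <stdint.h>
    #include <string.h>

    /* Fast inverse square root, roughly as it shipped in Quake III Arena. */
    float q_rsqrt(float number)
    {
        float x2 = number * 0.5f;
        float y  = number;
        uint32_t i;

        memcpy(&i, &y, sizeof i);        /* reinterpret the float's bits as an integer */
        i = 0x5f3759df - (i >> 1);       /* the infamous magic constant */
        memcpy(&y, &i, sizeof y);        /* back to float */

        y = y * (1.5f - x2 * y * y);     /* one Newton-Raphson iteration */
        return y;
    }

It's fast and it works; the nightmare scenario is code that inscrutable without any of the payoff.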
Do you think I'm being overly pessimistic? What are your predictions?
[0] - https://en.wikipedia.org/wiki/Fast_inverse_square_root
I'm not sure I get your analogy here, unless one side of it should be reversed, i.e. CISC has been hanging on for the last 30 years only because of all the shitty non-portable code written for it, and layer upon layer of hardware hacks.
Any prediction about technology 10 years into the future will be wrong within one to two years of the date it was made, so making such predictions is not a good use of one's time or electrons.