Morning gravy train discussing AI, so here is my opinion.
As great as a well-formed prompt to an LLM and its output may be, how much actual productivity are we talking about compared with an experienced Stack Overflow searcher who also has a multitude of completed projects in their own code archive to consult for how they did things before, or perhaps with those things once known as books? Suddenly the claimed 3x to 4x drops to 1.5x to 2x once you factor in prompt creation time and the tweaking it takes to turn okay LLM output into great output. It's not that LLM code is lengthy; it's that an expert, a professional, must know the code almost to the point of memorization, or at least to the point of total understanding.
Code maintainability and readability: I code by the mantra of "can this be read and understood six months from now, and quickly be modified, added to, and improved upon?"
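A rough, hypothetical sketch of what I mean (not from any real project): both versions below do the same thing, but only the second is the one I could pick up again in six months.

```python
# Terse version: correct, but the intent has to be reverse-engineered.
def f(xs):
    return sorted({x.strip().lower() for x in xs if x.strip()})

# Readable version: same behavior, but the names carry the intent,
# so it can be modified and extended without re-deriving what it does.
def normalize_tags(raw_tags):
    """Return unique, lowercased, whitespace-trimmed tags in sorted order."""
    cleaned = (tag.strip().lower() for tag in raw_tags)
    return sorted({tag for tag in cleaned if tag})
```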
Of books: it's almost out of a James Bond movie that books of quality code aren't considered anymore (the hype being LLM output), as if it were a plot by an American rival, e.g. China, to remove the American expert, the book author, as the container of such programming knowledge.
After all, China owes America for saving it from the famine and lack of jobs of the 1960s and 1970s, brought on by its oligarchical status quo at the time. Sadly, America is now in its own form of oligarchical status quo and may need the kind of saving that China was afforded.