At the very minimum, you need a human operator who can a) work with stakeholders to gather requirements and express them as unambiguous prompts, b) debug code that they themselves didn't write, and c) build a coherent architecture for integrating, packaging, and deploying the code IRL.
In a very tangible sense, it's similar to outsourcing the coding to a consultant who is fast but sloppy and hard to communicate with. They may be quick, but it takes considerable effort to clean up after them. No manager would ever put up with a human employee who "behaved" like an LLM (fast delivery with no quality control, outright lies and hallucinations, constantly ignoring subtle aspects of the instructions). LLMs are cheap and fast, but you must compensate heavily for their faults.
Think about the core skills that we identified as table stakes for working effectively with AI: sussing out requirements, exceptional debugging, and a solid sense of software architecture. I think you'll find those are the core skills of most good software developers! AI merely raises the bar for software professionalism - it's no longer enough to "just write code"; you need engineering leadership and project management skills to tie it all together. Those skills are now in higher demand.
The only programmers who need to be worried are the ones in mindless code-monkey positions. If your only contribution is the code you type into your editor, then yeah, AI's taking your job tomorrow. As we flood the market with "code", its value trends toward zero. And since code is but one of the necessary ingredients in making a software system, the other ingredients rise in value.
I would speculate that a fair number of those prominent people are talking their book: they have a financial incentive for others to believe it's true and to spend money on their AI products.
If people listen to that advice, then we all have nothing to worry about.
So: anything that is impractical to validate automatically, or, better still, impractical to validate at all, is going to fare better.
Careful, however: many things that do not appear easily validatable actually are! For example, you might not be able to validate (with a GitHub workflow) that users prefer, say, a new UI design, but you can of course check the design against well-understood usability principles, run A/B tests, and, probably soon, run user simulations.
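To make the A/B-test case concrete: deciding "do users prefer the new UI?" boils down to a statistical check that could run automatically against logged metrics. A minimal sketch in pure Python (the conversion numbers are made up; a real pipeline would pull them from your analytics store):

```python
from math import sqrt, erf

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than variant A?

    conv_a / conv_b: conversions in each arm
    n_a / n_b: users exposed to each arm
    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)               # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: old UI converted 480/10,000 users, new UI converted 560/10,000.
z, p = ab_test_significance(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is roughly 0.01, i.e. below 0.05
```

That kind of check is exactly the sort of thing that can be wired into CI, which is why "hard to validate" shrinks faster than it first appears.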
So: in the short and medium term, work that requires careful human attention or domain knowledge to validate (e.g. whether an interface design, system design, or architectural plan is fit for purpose) will last longer.
In the long term, I don't think any programming work is safe.
The benefits are uncertain; they might be greater or smaller than they were before LLMs could produce some code.
We know the cost of learning to code is now much less than before. The cost of learning almost anything is much less than before.
Those who say we should not learn are fools.
It's going to be a while before an AI can take a chip spec and give you a device driver. It's going to be longer before AI can take a buggy chip spec and give you a working device driver.
Still longer before it can give you a whole working product.
Engineering is about asking questions.
Looks like we are at least 50 years out from "AI" replacing entry-level engineers. I'll let my grandchildren worry about that, and use this glorified autocompletion until then.
This means that software engineers will need to be more like software designers. They will need to focus on what people need and want rather than being handed a design and having to code it by hand. There will still be a need for software engineers, but the way things are done will change. Translating what users need into apps will be much more important.
I suggest you keep up with AI's coding abilities and transition to a role where you can speak directly to the people who have ideas they want translated into computer code. Those apps will include AI; we are starting to see that in the OpenAI store. However, software engineers will be able to put together much more complicated apps.
If you have the skills, you might want to create apps that create custom apps for amateurs. Basically, the way I see it, a person opens an app, types in a request, and out comes a complete app that can be used. On the backend, you can have a fixed set of app templates and a bot that guides the user through clarifying questions until it produces a complete, working app.
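A toy sketch of what that backend could look like. Everything here is invented for illustration (the template names, the questions, the output format), and in a real system the `ask` callable would be an LLM-driven dialogue rather than console input:

```python
from dataclasses import dataclass, field

@dataclass
class AppTemplate:
    name: str
    questions: list[str]                      # what the bot must learn from the user
    answers: dict[str, str] = field(default_factory=dict)

    def render(self) -> str:
        """Turn the collected answers into a (stub) app definition."""
        config = "\n".join(f"  {q}: {a}" for q, a in self.answers.items())
        return f"app: {self.name}\n{config}"

# A fixed catalogue of supported app shapes, as suggested above.
CATALOGUE = {
    "inventory tracker": AppTemplate(
        "inventory tracker",
        ["What items do you track?", "Who needs edit access?"],
    ),
    "booking form": AppTemplate(
        "booking form",
        ["What resource is being booked?", "What time slots are allowed?"],
    ),
}

def build_app(request: str, ask) -> str:
    """Pick a template matching the request, ask its clarifying questions,
    and return a complete app definition. `ask` is any question -> answer callable."""
    template = next(
        (t for key, t in CATALOGUE.items() if key in request.lower()), None
    )
    if template is None:
        raise ValueError("No template covers this request yet")
    for question in template.questions:
        template.answers[question] = ask(question)
    return template.render()

if __name__ == "__main__":
    print(build_app("I need a booking form for our meeting rooms", ask=input))
```

The engineering work lives in designing the catalogue and the question flow; the end user only sees "type a request, answer a few questions, get an app."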
We won't get to the point of a user asking for a custom, complicated app and having AI output the complete program 100% of the time anytime soon, but we are inching towards it.
The big question is: what apps are needed, and how abstract can they be while still filling the needs of the maximum number of requestors?
I suspect game software will be greatly impacted, since you can create many different versions of a game while only a fixed amount of software engineering is needed.
So, for the foreseeable future, we will still need engineers but the role requirements will change.
AI is like the internet. Those who make platforms with it will get seriously rich. Those who don't get it at all will get left behind economically. The rest of us will adapt to it and use it as a different, and often better, way of working.
*Copilot for VS Code, for example.