5 decades: probably not.
Why do I think this? The same thing happened to the field of electrical engineering. There were 426,000 electrical engineering jobs in 1990 [1]. By 2002 there were 385,000, and by 2012 the number was down to 300,000 [2]. The numbers have continued to decline since. (All figures are U.S. numbers.)
So, aren’t EE grads still getting jobs? Yes, but they have specialized: they now do more software work and fewer electrical/electronic tasks. MEs have had a slightly harder road.
[1] https://www.bls.gov/mlr/1992/02/art3full.pdf
[2] https://www.computerworld.com/article/2487847/what-stem-shor...
- Determine business needs, translate those into requirements and design specs
- Plan upcoming work
- Develop testing strategies
- Integrate custom hardware/software with other custom hardware/software
- Create and maintain documentation
- Handle emergency situations
- Communicate intelligently with peers/managers about the state of their work
- Mentor new employees
Will some of these tasks be partially or fully automated? I hope so (looking at you, documentation). But it's going to take a while to automate every portion of every SWE job. And someone will have to figure out how to automate those jobs anyway, and who better than another SWE?
One thing I am concerned about is that current AI/LLMs are as good as many junior developers. My pair programming sessions with ChatGPT at my side are as productive as any session I've had with a junior dev (sometimes more productive, sorry to say). It's possible that, contrary to popular belief, demand for mid-level+ SWEs will go UP as a result of AI, because fewer junior devs will make it past the great AI filter.
You won't be doing exactly what you were doing last year, just as last year you weren't rewiring plugboards for mainframe programs. It'll be more like systems analysis.
Who knows about "completely," but the actual code generation part of software engineering will likely be handled by AI in the majority of cases within five or so years.
There will probably still be _some_ humans doing actual programming, just not usually for common tasks, because those will have plenty of training data available to handle automatically.
As soon as I found out about ChatGPT last year, I immediately anticipated that I would be competing with AI for work, and the next day I started building code generation tools aimed at end users. I am on the third version of my attempt, which is primarily a ChatGPT plugin.
My last version could generate, test, and deploy some programs based on a chat conversation, but it wasn't very reliable and I didn't have money for marketing. So I am working on a version with a slightly narrower scope that should be able to build a wide variety of simple applications more reliably through the ChatGPT interface.
I suspect that WWIII (powered by superintelligent killer AI swarms) might turn out to be a much more pressing concern for people than their careers.
We need to stop with this obsession with X completely killing Y. It’s the mindset that leads people to argue “Google won’t exist in one year” after ChatGPT has been out for one month (I saw a version of that argument multiple times), or that the world’s financial system would have been replaced with Bitcoin by now. At best it’s detached from reality and devoid of critical thinking, but it’s just as likely to be a grift that people with a financial stake in the technology want you to believe.
The world isn’t static. Stop looking at it through the lens of a single element and fantasising about that thing continuing to develop as everything else in the world remains unchanged.
This analysis is, of course, ignoring the other side of the equation, which is "what new jobs will be created in turn by the advent of these AI platforms"?
On the other hand, I still see massive oceans of CVEs out there. So clearly LLMs/AGIs/whatever have not fixed all of the human-induced bugs, and those should be among the easiest things for a super-smart AI to fix. That suggests to me it has not caught up to humans. When I see all the CVEs closed out as [FIXED] or [FIXED FOREVER, HACK ME BRO, JUST TRY IT], then perhaps I'll be concerned for developers. Yes, I expect those theoretical AGIs to get snarky in git commit messages.