It does still need an experienced human to review its work, and I do regularly find issues with its output that only a mid-level or senior developer would notice. For example, I saw it write several Python methods this week that, when called simultaneously, would lead to deadlock in an external SQL database. I happen to know these methods WILL be called simultaneously, so I was able to fix the issue.
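To make that failure mode concrete, here's a minimal sketch of the pattern (assume a PostgreSQL backend accessed via psycopg2; the table, columns, and method names are made up, not the actual code from that review). Each method updates the same two rows inside one transaction, but in the opposite order, so two concurrent callers can each end up holding the row lock the other one needs:

    import psycopg2  # assumption: a Postgres-backed app; any DB with row-level locks behaves similarly

    def credit_then_debit(conn, amount):
        # This transaction locks row id=1 first, then row id=2.
        with conn.cursor() as cur:
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 1", (amount,))
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 2", (amount,))
        conn.commit()

    def debit_then_credit(conn, amount):
        # Same two rows, opposite lock order: id=2 first, then id=1.
        with conn.cursor() as cur:
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 2", (amount,))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 1", (amount,))
        conn.commit()

    # Called concurrently, caller A holds the lock on row 1 and waits for row 2,
    # while caller B holds row 2 and waits for row 1: a classic lock-ordering
    # deadlock. The fix is to touch rows in one consistent order everywhere,
    # or to catch the database's deadlock error and retry one transaction.

Each method looks fine in isolation, which is exactly why a reviewer who knows the call patterns still matters.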
In existing large code bases that talk to many external systems and have poorly documented, esoteric business rules, I think Claude and other AIs will need supervision from an experienced developer for at least the next few years. Part of the reason for that is that many organizations simply don't capture all requirements in a way that AI can understand. Some business rules are locked up in long email threads or water cooler conversations that AI can't access.
But, yeah, Claude is already acting like a team of junior/mid-level developers for me. Because developers are highly paid, offloading their work to a machine can be hugely profitable for employers. Perhaps, over the next few years, developers will become like sys admins, for whom the machines do most of the meaningful work and the sys admin's job is to provision, troubleshoot and babysit them.
I'm getting near the end of my career, so I'm not too concerned about losing work in the years to come. What does concern me is the loss of knowledge that will come with the move to AI-driven coding. Maybe in ten years we will still need humans to babysit AI's most complicated programming work, but how many humans will there be ten years from now with the kind of deep, extensive experience that senior devs have today? How many developers will have manually provisioned and configured a server, set up and tuned a SQL database, debugged sneaky race conditions, worked out the kinks that arise between the dozens of systems that a single application must interact with?
We already see that posts to Stack Overflow have plummeted since programmers can simply ask ChatGPT or Claude how to solve a complex SQL problem or write a tricky regular expression. The AIs used to feed on Stack Overflow for answers. What will they feed on in the future? What human will have worked out the tricky problems that AI hasn't been asked to solve?
I read a few years ago that the US Navy convinced Congress to fund the construction of an aircraft carrier that the Navy didn't even need. The Navy's argument was that it took our country about eighty years to learn how to build world-class carriers. If we went an entire generation without building a new carrier, much or all of that knowledge would be lost.
The Navy was far-sighted in that decision. Tech companies are not nearly so forward-thinking. AI will save them money on development in the short run, but in the long run, what will they do when new, hard-to-solve problems arise? A huge part of software engineering lies in defining the problem to be solved. What happens when we have no one left capable of defining the problems, or of hammering out solutions that have not been tried before?
AI would need to (1) perform better than a person in a particular role, (2) do so for less than that person's total cost, and (3) do so with fewer mistakes and reduced liability.
Humans are objectively quite cheap. In fact, relative to the output a single human produces, we're the cheapest we've ever been in history (particularly compared with the cost of the investment in AI and the kinds of roles AI would be 'replacing').
If there are any economic shifts, they will be increases in per-person efficiency, requiring a smaller workforce. I don't see that changing significantly in the next 5-10 years.
That said, in the meantime, I'm not confident that I'd be able to find another job if I lost my current one, because I not only have to compete against every other candidate but also against the ethereal promise of what AI might bring in the near future.
- talking to people to understand how to leverage their platform and to get them to build what I need
- working in closed-source codebases. I know where the traps and the footguns are; Claude doesn’t
- telling people no, that’s a bad idea, don’t do that. This is often more useful than a “you’re absolutely right” followed by the perfect solution to the wrong problem
In short, I can think and I can learn. LLMs can’t.
Before this I was a JavaScript developer. I can absolutely see AI replacing most JavaScript developers. The work felt rote, with most people completely terrified to write original code. Everything had to be a React template with a ton of copy/paste. Watch the emotional apocalypse when you take React away.
So no, I'm not worried.
I have a few co-workers who are deep into the current AI trends. I also have the pleasure of reviewing their code. The garbage that gets pushed is insane. I feel I can’t comment on a lot of the issues I see because there’s so much slop that hasn’t been thought through that addressing it would mean rewriting half of their PR. Maybe it speaks more to their coding ability that they accept that stuff. I see comments that are clearly AI-written and pushed as if they haven’t been reviewed by a human. I guard public-facing infrastructure and apps as much as I can for fear of this having preventable impacts on customers.
I think this just goes to show that AI assistants can be powerful, but only in the hands of an already decent developer.
I’ve kind of lost respect for these developers deep into the AI ecosystem who clearly have no idea what’s being spat out and are just looking to squeeze 8 hours of productivity into 2 or 3.
What’s your plan when today’s AI functionality costs 10,000x more?
Sooooo... no.
(Also, look at what the smart guys/gals who found this topic before me said about profits vs income etc.)
I predicted commoditization back in 2016, when I saw that no matter what I learned, it was going to be impossible to stand out from the crowd on the enterprise dev side of the market or to command decent top-of-market raises.[1]
I knew back then that the answer was going to be filling in the gaps with soft skills, managing larger, more complex problems, being closer to determining business outcomes, etc.
I pivoted into customer-facing cloud consulting specializing in application development (“application modernization”). No, I am not saying “learn cloud”.
But to focus on the commoditization angle: when I was looking for a job in late 2023 after being Amazoned, I submitted literally hundreds of applications as a Plan B. Each open req had hundreds of applicants, and my application, let alone my resume, was viewed maybe 5 times (LinkedIn shows you this).
My Plan A of using my network and targeted outreach resulted in 3 offers within three weeks.
The same pattern emerged in 2024 when I was out looking again.
I’m in the interviewer pool at my current company; our application-to-offer rate is 0.4%.
[1] I am referring to the enterprise dev market, where most developers in the US work.
You could be made unemployable even without AI; all it takes is a bit of bad luck.
This fear of AI taking over your job is manufactured.
However, that worry is replaced by the fear that so many people could lose their jobs that a consequence could be a complete collapse of the social safety net that is my only income source.
1) Nearly all the job losses I've dealt with came when a company ran low on money, because it cost too much or took too long to build a product or get it to market.
2) LLMs are in the sweet spot of doing the things I don't want to do (writing flawless algorithms from known patterns, sifting through 2000-line logs) while not touching the things I'm good at (business cases, feature prioritization, juice). Engineering work now involves more fact-checking and "data sheet reading" than it used to, which I'm happy to do.
3) Should programming jobs be killed, there will be more things to sell, and more roles for business/product owners. I'm not at all opposed to selling the things that AI is making.
4) Also Gustafson's Law: when capacity gets cheaper, we take on bigger problems rather than doing the same work with less. All the cloud stuff led to things like Facebook and Twitch, which created a ton more jobs. I don't believe we'll see things like "vibe code fixer". But we'll probably see things like robotics running on a low-latency LLM brain, which unlocks a different host of engineering challenges. In 10 years, it might be the norm to create household bots, and people might be coding apps based on how they vacuum the house and wipe the windows.
5) I don't take a high salary. The buffer between company profit and my cost is big enough that they don't feel the need to squeeze every drop out of me. They make more profit paying me, the AI, and my colleagues than they would paying the AI alone.