HACKER Q&A
📣 johnwheeler

Are you afraid of AI making you unemployable within the next few years?


On Hacker News and Twitter, the consensus view is that no one is afraid. People concede that junior engineers and grad students might be the most affected, but they still seem to regard their own situations as sustainable. My question is: is this just wishful thinking and human nature, trying to fight the inevitable? I ask because I seriously don't see a future where there are lots of programmers anymore. I see mass unemployment for programmers. People are in denial, and all the claims that AI can't write code without making mistakes stop being valid the moment a model that writes flawless code is released, potentially overnight. Claude 4.5 is a good example of the trajectory. I just don't see any valid argument that the technology won't reach a point where it makes the job irrelevant, or if not irrelevant, completely changes the economics.


  👤 uberman Accepted Answer ✓
I use Claude 4.5 almost every day. It makes mistakes every day. The worst mistakes are the ones that are not obvious; only careful review reveals the flaws. At the moment, even the best AI can't be relied on even for modest refactoring. What AI does right now is make senior developers worth more and junior developers worth less. I am not at all worried about my own job.

👤 diamondap
I think AI will substantially thin out the ranks of programmers over the next five years or so. I've been very impressed with Claude 4.5 and have been using it daily at work. It tends to produce very good, clean, well-documented code and tests.

It does still need an experienced human to review its work, and I do regularly find issues with its output that only a mid-level or senior developer would notice. For example, I saw it write several Python methods this week that, when called simultaneously, would lead to deadlock in an external SQL database. I happen to know these methods WILL be called simultaneously, so I was able to fix the issue.
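To make that concrete, here's a minimal sketch of the kind of bug I mean (invented table and function names, psycopg2-style placeholders; not the actual code from work): two methods that update the same two rows inside transactions, but in opposite order. Run concurrently, each can take one row lock and then block forever waiting for the other, until the database kills one transaction to break the deadlock.

    # Hypothetical illustration of a lock-ordering deadlock, not the real code.
    def debit_then_credit(conn, src_id, dst_id, amount):
        with conn:  # one transaction; commits (or rolls back) on exit
            cur = conn.cursor()
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                        (amount, src_id))  # locks the src row first...
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                        (amount, dst_id))  # ...then the dst row

    def credit_then_debit(conn, src_id, dst_id, amount):
        with conn:
            cur = conn.cursor()
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                        (amount, dst_id))  # locks the dst row first...
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                        (amount, src_id))  # ...then the src row: deadlock when interleaved

    # The fix is to take row locks in one consistent global order
    # (e.g. lowest id first) in every method that can run concurrently.

A model that hasn't been told the call pattern has no way to know those two methods run concurrently, which is exactly why the human review mattered.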

In existing large code bases that talk to many external systems and have poorly documented, esoteric business rules, I think Claude and other AIs will need supervision from an experienced developer for at least the next few years. Part of the reason for that is that many organizations simply don't capture all requirements in a way that AI can understand. Some business rules are locked up in long email threads or water cooler conversations that AI can't access.

But, yeah, Claude is already acting like a team of junior/mid-level developers for me. Because developers are highly paid, offloading their work to a machine can be hugely profitable for employers. Perhaps, over the next few years, developers will become like sys admins, for whom the machines do most of the meaningful work and the sys admin's job is to provision, troubleshoot and babysit them.

I'm getting near the end of my career, so I'm not too concerned about losing work in the years to come. What does concern me is the loss of knowledge that will come with the move to AI-driven coding. Maybe in ten years we will still need humans to babysit AI's most complicated programming work, but how many humans will there be ten years from now with the kind of deep, extensive experience that senior devs have today? How many developers will have manually provisioned and configured a server, set up and tuned a SQL database, debugged sneaky race conditions, worked out the kinks that arise between the dozens of systems that a single application must interact with?

We already see that posts to Stack Overflow have plummeted since programmers can simply ask ChatGPT or Claude how to solve a complex SQL problem or write a tricky regular expression. The AIs used to feed on Stack Overflow for answers. What will they feed on in the future? What human will have worked out the tricky problems that AI hasn't been asked to solve?

I read a few years ago that the US Navy convinced Congress to fund the construction of an aircraft carrier that the Navy didn't even need. The Navy's argument was that it took our country about eighty years to learn how to build world-class carriers. If we went an entire generation without building a new carrier, much or all of that knowledge would be lost.

The Navy was far-sighted in that decision. Tech companies are not nearly so forward-thinking. AI will save them money on development in the short run, but in the long run, what will they do when new, hard-to-solve problems arise? A huge part of software engineering lies in defining the problem to be solved. What happens when we have no one left capable of defining the problems, or of hammering out solutions that have not been tried before?


👤 nness
Largely, no.

AI would need to (1) perform better than a person in a given role, (2) do so for less than that person's total cost, and (3) do so with fewer mistakes and reduced liability.

Humans are objectively quite cheap. In fact, relative to the output of a single person, we're the cheapest we've ever been in history, particularly when set against the cost of the investment in AI and the kind of roles AI would be 'replacing.'

If there are any economic shifts, they will be increases in per-person efficiency that allow a smaller workforce. I don't see that changing significantly in the next 5-10 years.


👤 cjs_ac
The AI providers' operations remain heavily subsidised by venture capital. Eventually those investors will turn around and demand a return on their investment. The big question is whether, when that happens, LLMs will be useful enough to customers to justify paying the full cost of developing and operating them.

That said, in the meantime, I'm not confident that I'd be able to find another job if I lost my current one, because I not only have to compete against every other candidate, but also against the ethereal promise of what AI might bring in the near future.


👤 wrxd
As much as I would like my job to be exclusively about writing code, the reality is that the majority of it is:

- talking to people to understand how to leverage their platforms and to get them to build what I need

- working in closed-source codebases; I know where the traps and the footguns are, and Claude doesn't

- telling people "no, that's a bad idea, don't do that," which is often more useful than a "you're absolutely right" followed by the perfect solution to the wrong problem

In short, I can think and I can learn. LLMs can’t.


👤 benoau
I think people should be very afraid: the jobs are only safe if AI peaks in adoption and stops improving, and it shows no signs of slowing.

👤 gitgud
No, managers don’t want to be using Claude Code… tools change

👤 bhag2066
Denial is the first stage

👤 austin-cheney
I am in management of enterprise API development. AI might replace coders, but it won't eliminate people who can work between teams and make firm decisions that drive complex projects forward. Many developers appear to struggle with this, and when completely lost they waste effort on building SOPs instead of just formulating an original product.

Before this I was a JavaScript developer. I can absolutely see AI replacing most JavaScript developers. It felt really autistic with most people completely terrified to write original code. Everything had to be a React template with a ton of copy/paste. Watch the emotional apocalypse when you take React away.


👤 charlie-83
None of the current models can make competent changes to the codebases I work on. This isn't about them "making mistakes" that I have to fix. They fail completely, to the point where I can't use any of their output except in the simplest cases (and even then it's faster to code it myself).

So no, I'm not worried.


👤 daringrain32781
I don’t think so. Here’s why:

I have a few co-workers who are deep into the current AI trends. I also have the pleasure of reviewing their code. The garbage that gets pushed is insane. I feel I can’t comment on many of the issues I see, because there’s so much slop that hasn’t been thought through that addressing it all would mean rewriting half of their PR. Maybe it speaks more to their coding ability that they accept that stuff. I see comments that are clearly AI-written and pushed as if no human ever reviewed them. I guard public-facing infrastructure and apps as much as I can for fear of this having preventable impacts on customers.

I think this just shows that AI assistants can be powerful, but only in the hands of an already decent developer.

I’ve kind of lost respect for these developers deep in the AI ecosystem who clearly have no idea what’s being spat out and are just looking to squeeze 8 hours of productivity into 2 or 3.


👤 nacozarina
It seems like a good thing when the AI service is being subsidized and sold to you for 1/10,000th of what it costs.

What’s your plan when today’s AI functionality costs 10,000x more?


👤 firefax
I remember similar discussions back when it was called "machine learning".

Sooooo... no.

(Also, look at what the smart guys/gals who found this topic before me said about profits vs income etc.)


👤 raw_anon_1111
There are two compounding issues: AI and commoditization. AI is just making the commoditization problem worse.

I predicted commoditization back in 2016, when I saw that no matter what I learned, it was going to be impossible to stand out from the crowd on the enterprise dev side of the market or command decent top-of-market raises.[1]

I knew back then that the answer was going to be filling in the gaps with soft skills, managing larger more complex problems, being closer to determining business outcomes, etc.

I pivoted into customer facing cloud consulting specializing in application development (“application modernization”). No I am not saying “learn cloud”.

But focusing on the commoditization angle: when I was looking for a job in late 2023, after being Amazoned, I submitted literally hundreds of applications as a Plan B. Each open req had hundreds of applicants, and my application, let alone my resume, was viewed maybe 5 times (LinkedIn shows you).

My plan A of using my network and targeted outreach did result in 3 offers within three weeks.

The same pattern emerged in 2024 when I was out looking again.

I’m in the interviewer pool at my current company; our application-to-offer rate is 0.4%.

[1] I am referring to the enterprise dev market where most developers in the US work


👤 shahbaby
Let's put things into perspective.

You could be made unemployable even without AI; all it takes is a bit of bad luck.

This fear of AI taking over your job is manufactured.


👤 mikewarot
Having been yeeted out of the labor market by long covid, my worries about my own employment are settled.

However, that worry is replaced by the fear that so many people could lose their jobs that a consequence could be a complete collapse of the social safety net that is my only income source.


👤 nalllar
Somewhat.

👤 muzani
If anything, I feel it makes my career more secure.

1) Nearly all the job losses I've dealt with came when a company ran low on money, because it cost too much or took too long to build a product or get it to market.

2) LLMs are in the sweet spot of doing the things I don't want to do (writing flawless algorithms from known patterns, sifting through 2000-line logs) while not touching the things I'm good at (business cases, feature prioritization, juice). Engineering work now involves more fact-checking and "data sheet reading" than it used to, which I'm happy to do.

3) Should programming jobs be killed, there will be more things to sell. And more roles for business/product owners. I'm not at all opposed to selling the things that the AI is making.

4) Also, Gustafson's Law: as capacity grows, we scale up the problems we tackle rather than just doing the same work faster. All the cloud stuff led to things like Facebook and Twitch, which created a ton more jobs. I don't believe we'll see things like "vibe code fixer". But we'll probably see things like robotics running on a low-latency LLM brain, which will unlock a whole different set of engineering challenges. In 10 years, it might be the norm to create household bots, and people might be coding apps based on how they vacuum the house and wipe the windows.

5) I don't take a high salary. The buffer between company profit and my cost is big enough that they don't feel the need to squeeze every drop out of me. They make more profit paying me, the AI, and my colleagues than they would paying the AI alone.