I saw a few discussions on HN about the impact of AI on developers. It seems that, while there's no consensus, a lot of folks here think this LLM thing is overblown and will somehow go away, or that AI progress won't be as disruptive as everyone makes it out to be. Others point out that the job of a developer is not all about coding, but gathering requirements, understanding customer needs, and then coding. So nothing to worry about here.
Then I was reading other forums, such as the VFX subreddit, where a lot of people are worried and agree that the field is dying a slow death. New people who studied VFX or something similar are looking for a career change, while veterans saw it coming and take solace that they are not far from retirement age.
Am I reading too much into this? Is the recent progress in AI just the next Bitcoin/IPv6/3D TVs, etc.? Or do we need to prepare for the worst right now?
The comparison to Bitcoin is apt, in that the blockchain (not quite the same thing, but related and part of the same hype cycle) is actually a useful algorithm for certain applications. Way, way fewer applications than was believed at the peak of the hype cycle.
The thing is, the way to prepare for wider use of LLMs is to make sure you're doing the things that LLMs are bad at, which are:
1) being honest
2) knowing the fundamentals of how the system you're programming for works, instead of just regurgitating some code you found elsewhere
3) writing the simplest code that gets the job done, rather than finding an excuse to use the latest buzzword to make things complex (see the sketch after this list)
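To make point 3 concrete, here's a deliberately exaggerated illustration in Python (entirely hypothetical, not from any real codebase): both versions below deduplicate a list while preserving order, but one of them reaches for machinery the problem never asked for.

    # Buzzword version: an abstract strategy, a factory, and a class
    # hierarchy, all to remove duplicates from a list.
    from abc import ABC, abstractmethod

    class DeduplicationStrategy(ABC):
        @abstractmethod
        def deduplicate(self, items: list) -> list: ...

    class OrderPreservingDeduplication(DeduplicationStrategy):
        def deduplicate(self, items: list) -> list:
            seen, result = set(), []
            for item in items:
                if item not in seen:
                    seen.add(item)
                    result.append(item)
            return result

    class DeduplicatorFactory:
        @staticmethod
        def create() -> DeduplicationStrategy:
            return OrderPreservingDeduplication()

    # Simple version: one line, same behavior, far less to maintain.
    def dedupe(items: list) -> list:
        return list(dict.fromkeys(items))  # dicts keep insertion order

    print(DeduplicatorFactory.create().deduplicate([3, 1, 3, 2, 1]))  # [3, 1, 2]
    print(dedupe([3, 1, 3, 2, 1]))                                    # [3, 1, 2]

Both print the same result; the second is what an experienced human reaches for, and the first is the kind of ceremony an LLM will happily generate for you by the page.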
Not only would all of this help distinguish you from what an LLM would produce, it is also good practice regardless of whether LLMs flame out like many hype cycles before them and we enter a third "AI winter".
By the way, if you're not familiar with the term "AI winter", now would be a good time to look it up.
Over the past few weeks I’ve used Copilot to teach me how to create some fairly complex process automations. That probably took a job away from a programmer, or maybe the task wouldn’t have gotten done at all. I’m not sure.
However, we are about one major breakthrough away from a total game change. How long before that happens?
The previous incarnations of AI hype have been so oversold as to lead to the previous "AI winters." There's real progress this time too, but it's incredibly hard to reason about or criticize because it's so heavily covered in marketing hype and scammer scrap. After a few years the real advances will crawl out into the world and be seen for the incremental, useful tools they are.
So, speaking for myself: I'm not in denial, and I have hopes. I'm just cynical about the eventual realities and deeply cynical about the time frames in which they will develop.
I’m also impressed that a car can mostly drive itself these days and only rarely veer into oncoming traffic, and yet humans still drive the vast majority of all vehicles.
Basically, I’ll believe it when I see it.
I have seen HN opinions about what we're calling AI this year range from fear of the robot overlords, to wide-eyed acceptance of the hype and reverence for Sam Altman, to wait and see, to skepticism and outright dismissal of the whole thing as another grift run by VCs. So opinions range all over the place, as does the reasoning expressed to support them.
No one knows what will happen, other than the obvious: Lots of money will get poured into AI, and the technology will get deployed (almost certainly prematurely because of FOMO), and pushed on us whether we want it or not. That's already happening. Whether the money and the hype will actually accomplish the stated goals of either AGI or vastly improved productivity, who knows. So far we don't even have full self-driving or LLMs that don't just make things up.
Some people, like Sam Altman, seem to think AGI, or at least productive uses that outperform humans, comes down to scaling LLMs up. Other people think that LLMs will plateau, limited both by how they inherently work and by increasingly polluted, "photocopy of a photocopy" training data as future LLMs train on LLM-generated content of questionable quality. Follow the money, and it very likely points to who stands to profit from LLM hype and adoption.
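For what it's worth, the "photocopy of a photocopy" effect is easy to demonstrate in a toy setting. The sketch below (purely illustrative, not a claim about any particular model) repeatedly fits a Gaussian to samples drawn from the previous generation's fit; the variance decays generation after generation, so the "models" grow steadily less diverse even though each individual step looks reasonable.

    # Toy "model trains on model output" loop: fit a Gaussian to data,
    # sample from the fit, refit, repeat. With the maximum-likelihood
    # variance estimate, expected variance shrinks by a factor of
    # (n-1)/n per generation, so diversity quietly collapses.
    import random
    import statistics

    random.seed(0)
    n = 50                                             # samples per generation
    data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data

    for gen in range(1, 101):
        mu = statistics.fmean(data)
        var = sum((x - mu) ** 2 for x in data) / n     # MLE variance (divide by n)
        # The next generation trains only on the previous model's samples.
        data = [random.gauss(mu, var ** 0.5) for _ in range(n)]
        if gen % 25 == 0:
            print(f"generation {gen:3d}: variance ~ {var:.3f}")

Running it, the variance drifts toward zero over the generations. Real training pipelines are obviously far more complicated, but the underlying worry, estimators fed their own output losing the tails of the distribution, is the same.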
I've worked in IT as a developer and system admin for over four decades, so I remember plenty of previous hype cycles, including the second "AI winter" (the first AI winter happened when I was still in high school). And I've survived multiple apparent threats to my career future, mainly offshore outsourcing, but also things like no-code/low-code, trends that stoked a lot of worry but either fizzled out or caused more of a ripple than a tsunami. I agree with the people who have pointed out, on HN and elsewhere, that writing software requires a lot more than learning a programming language and cobbling together snippets gleaned online, though a lot of people get jobs in the industry with those skills and little else (or used to).
Some specific fields look more vulnerable than others. News and "content production" have already taken a hit. I wouldn't tell my kids to go into transcription, translation, paralegal, or even law at this point. Maybe avoid rote medical jobs like looking for anomalies in lab and imaging results. VFX looks vulnerable, but that field got computerized a long time ago. For everyone amazed at Sora's videos (which I admit seem impressive), consider we've had Pixar-style movies and photorealistic video games for quite a while without anything we might call AI. We could experience significant hollowing out of some industries, very similar to what happened to auto manufacturing, and manufacturing in general, due to both automation and offshoring.
Because I write code for a living, I like to think that won't get taken over by LLMs in my lifetime. I've looked at the current crop of tools and I'm not worried, but junior developers might want to consider the impact of those tools on their prospects. I will retire when I can't buy a keyboard that doesn't have a Copilot key on it.