For myself, every time I try to use ChatGPT it completely fails to be helpful, for the same reason that Stack Overflow also fails: the problems I have as a senior are too specific or too complex. Every single time, querying an LLM ends up being a waste of time.
Does it mean that I can do 10-20% more things? No, I likely do the same number of tasks, but I feel the quality has improved because I can save brain cycles for the things LLMs are not good enough at.
With productivity being measured as what, exactly?
The Venn diagram of "discussions about LLMs" and "Idea Guys Talking" is close to a circle in my experience.
The code isn't always the most desirable, but code review catches that.
On the other hand, this puts a larger burden on seniors, who now have to review subpar code at a faster pace.
A dubious claim, at best.
My issue with LLMs is not that they "will replace" your expensive staff. The problem/challenge, imho, is not how to produce 1000 lines of code. Yes, I can do that myself with an LLM even now.
The real challenge is that businesses need people who understand the business as well. You've most likely all had arguments in your workplace with people who make a code suggestion that "seems right" at the time, but in your core you feel it will just become technical debt, or will create a security issue in the future.
So cutting down from (e.g.) 10 experienced devs to 2, to save money, will come at the cost of the business coming back to you asking for more and different things as the direction/strategy changes.
I've kept on top of what's happening in this space and found ways to improve my work, but I doubt the improvement crosses 10%. This place will have a skewed sample of people who can use LLMs efficiently; outside, in the real world, I don't think the impact is noticeable at all yet.
1. A starting point for a problem I've no experience with - "How do I set up replication on a database". I won't follow it blindly, but it gives me a starting point to search online.
2. Helping me put together proposals and documentation. It's great at setting up an outline or rewriting my badly written drafts.
3. Writing regex (a sketch follows).
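For instance, asking for a regex to match ISO-8601 dates might yield something like this (a contrived sketch of a typical answer; the pattern and tests are mine, not from any particular session):

    import re

    # Matches ISO-8601 calendar dates such as 2024-03-17.
    ISO_DATE = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

    assert ISO_DATE.match("2024-03-17")
    assert not ISO_DATE.match("2024-13-01")  # invalid month is rejected

As with the database answer, I'd still test it myself rather than trust it blindly.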
As for impacting jobs specifically, I haven't found any impact yet. If anything, I've seen companies either impose blanket bans on using AI (for fear of people inputting sensitive data), outright ban the URLs on the VPN, or put very strict policies in place for how these tools can be used.
This was my first app in WPF. Huge learning curve; Copilot has been indispensable.
AI can still help a ton with predictable and routine code, but it sucks at hard code. That gets even more true the more niche the application is. I would expect it is making an impact in applications that are CRUD, but anything with some amount of depth is going to require the same number of humans to power it.
This is what happens with every improvement in software, whether it's better hardware, improved tooling, bigger libraries or simply more programmers. The increased expectations always create more, not less, demand for programming. The only thing a change in tooling does is shift which skills are in demand and which features are demanded.
Juniors are probably the ones most helped by these tools. I find the benefits a mixed bag. Even as a better autocomplete, it (GitHub Copilot) frequently makes the most trivial grammatical errors, such as unmatched parentheses, which a "dumb" autocomplete would never produce. And sometimes the code looks so good that it is easy to overlook the one insidious semantic error that is now costing you debugging time.
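A contrived sketch of the kind of error I mean (the function and the off-by-one are mine, not a real Copilot transcript):

    def moving_average(values, window):
        # Looks clean at a glance, but the range stops one window short,
        # silently dropping the final average.
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window)  # should be len(values) - window + 1
        ]

Nothing here trips a syntax check or a reviewer skimming the diff; it only shows up when someone notices the last data point never contributes.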
I won't be replaced by AI, but I might be replaced by a younger dev who is able to get more value out of these newfangled tools than me.
From what I've seen, the productivity boost is negligible and might even be negative. The developers I've seen who claim a productivity boost seem to discount all the times it leads them astray. That needs to be deducted from the gains of getting the odd snippet of code a few minutes faster. I know that most of the time, when I've asked questions I couldn't trivially google, it gave me bullshit answers.
It's ironic, really. However, it just goes to show that the mainstream corporate media is very, very good at spinning a narrative out of fiction. Even junior devs are convinced by this narrative.
The big risk for devs' job prospects is not AI; it is tech industry consolidation: that is, Microsoft, Amazon and Google growing their competitive moat and swallowing or destroying startup competition. The more secure they feel in their market position, the more likely they will be to swap their workforce for cheaper, lower-quality workers. This is what happened to Detroit in the 1950s, and why it went from a thriving middle-class city with tens of thousands of auto industry SMEs to a desolate wasteland run by 3 vertically integrated companies who conspired to strangle all startup competition.
Anything that current LLMs can do code-wise is overshadowed for me by what they can do product-, design- and marketing-wise. If LLMs actually break the productivity ceiling, why would any developer bother working for anyone other than themselves? Having a job and working for someone else just disproportionately improves their wealth while limiting yours.
If this turns out to be the case, I'd expect to see an unprecedented number of micro software shops spring up overnight. Why bother burning yourself out working for a FAANG or F1000 when you could make more and be in control of your own destiny and happiness? A rise in entrepreneurship should follow any actual across-the-board increase in LLM productivity.
Most people in here fail to understand how broad and varied the software industry and culture have become. This place, like every community on the internet, is a bubble.
So, if you're talking about basic CRUD in Java/Python/Node done remotely from third-world countries and Eastern Europe for companies not directly in the technology or finance sectors (e.g. retail, services), then the answer is a resounding yes. People in Poland, Brazil and India are certainly using LLMs to do faster what they did before: spitting out code they don't understand, copied from StackOverflow.
True, most of the time it is bad code. But anyone who hires from third-world countries is not overly concerned about code quality.
1. Many companies are hiring "AI" engineers. My guess is 90% of these jobs are virtue signaling to investors, and those positions will go to folks who are good at appearing competent in interviews. Yay! More overpaid incompetent colleagues, just what we need.
2. My editor saves me about 2 minutes a day with smart printf/loop/variable completions (JetBrains editors -- no sarcasm, I like this!)
3. I am wasting time responding to email from PMs suggesting that "we don't have engineering capacity to do XYZ, but maybe we can use an AI to do it???"
(I am not anti-GenAI -- I've used it to create flyers and do pretty cool stuff!)
This reaction is quite harsh and emotional ("if you think you can be replaced by AI, it means you are a shitty developer" is quite popular). It says more about our own insecurities than about LLMs.
Yes, you CAN be replaced: by AI, by some other technology shift, by younger, more productive developers, or simply by market forces ruling your skills out of favour. It happened before; it'll happen again.
I've tried using them myself, but they end up sapping more of my time than they save because of all the dead ends they send me down with plausible sounding bullshit. Things that use real terms, but incorrectly. I basically treat LLM output like that one guy who doesn't know anything except the existence of a bunch of technical terms and who throws those terms around everywhere trying to sound smart. It might be nice to know that a term exists if you're unfamiliar with the topic, but only to go look up what it actually means elsewhere.
I don't think we'll see much of an impact of LLM-generated code until these systems are trained on the code and the existing user and dev documentation of the project itself.
As for the impact on junior engineers and prospective candidates, I'd say virtually zero.
When I fix bugs, it's usually not helpful because I need to debug and track down where the bug is.
When I develop new features, it occasionally uses the wrong lock, or makes up APIs that don't exist. I find it gets in the way more for development.
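For the "wrong lock" case, a contrived sketch (the names and the bug are mine, not an actual completion I saved):

    import threading

    cache_lock = threading.Lock()
    stats_lock = threading.Lock()
    stats = {"hits": 0}

    def record_hit():
        # Plausible-looking completion that grabs the cache lock instead of
        # the stats lock, so concurrent updates to `stats` still race.
        with cache_lock:  # should be stats_lock
            stats["hits"] += 1

It compiles, it even "uses a lock", and it would pass a casual review, which is exactly why it gets in the way.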
For C# and .NET core, I found IntelliCode to be pretty useful.
I only have anecdata to share. My coworkers and friends seem to be going through the disillusionment phase, finding LLMs to be a better (if mildly outdated) search engine and a helper for simple, well-known tasks. I guess the 10% productivity improvement makes sense from what I've seen.
I've also met company owners who thought they could reduce their workforce drastically because of LLMs. I can only wish them good luck; it's going to be bumpy for them once they realize the mess they will be in (e.g. spending more time troubleshooting systems their engineers never understood in the first place).
TL;DR: No, except for places you wouldn't want to work at.
Also, people sending a barebones ChatGPT cover letter when it is optional to do so.
Maybe other companies did manage to squeeze real help out of it in more complex situations, or maybe GPT-4.5 is much better than Copilot (I tried to use public ChatGPT to detect a threading bug and it still couldn't find it; in the end I found it myself), but at least for us the experience with Copilot wasn't that stellar.
A good poet says complex things using few words. A bad poet is someone who conveys something simple using a lot of words.
My guess is that, by this time next year, the vast majority of people and companies currently enthusiastic about generative AI will be pretending they never had anything to do with it, a small hardcore of true believers excepted. The hype cycle will then begin anew with something else.
So far no, but in the future, with more specific and enterprise-suitable tooling, likely.