Is there a field in computer science with relative job security that shouldn't be impacted by similar AI products?
I know people will mock me for having these thoughts, but I'm only 3 years into my career, so I already lack the confidence some of the developers on this forum have.
Some initial thoughts include pivoting to lower-level work like OS development. I am highly motivated and willing to put in the hours. I just fear I'm entering a race I have no odds of finishing.
The Luddites said first, "this tech will make the whole factory more productive; let's keep everyone on board and split the gains between the owners and the workers. It's a win-win."
The factory owners said, "no, I can keep 100% of the gains if I fire you all and hire unskilled workers."
Engineers today are looking at AI that's close to producing non-toy code. We are in the same spot as the Luddites - but we have the ability to see it coming.
The answer is to consider: unionizing, starting or joining worker co-ops, fighting for legislation that takes care of workers' basic needs (e.g. universal basic income) so displaced workers aren't made destitute, etc.
I'd suggest the opposite: move up instead. What's difficult for a computer is what's been difficult for humans since the invention of programming: turning a human's "I have this problem" into a concrete program that solves that specific problem in a good way.
What's hard in writing software is not the writing itself but making sure the software solves the right problem in the right way. If you aim to do this, I don't think computers will be able to do a better job than humans for a long time.
It's going to have limited knowledge of internal company APIs and classes -- especially poorly designed ones. To use it effectively as a software engineer, you need to grok the code just to write a prompt.
In that sense, since 70% of software is maintenance (rather than greenfield), most software engineers will be ok.
However, writing software is one of the most creative acts out there. It's like writing a novel or a screenplay. Taking fuzzy business requirements and distilling them down into software is not something you can automate. When those fields fall to AI, it's because general AI is here and humans have fallen to AI. At that point you've got bigger problems. Programmers are not going away, and demand for engineers still way exceeds supply. Demand for GOOD engineers is insane.
So learn as much as you can, put in the work, and you'll have an amazing career. Don't fret about AI.
I've worked on side-projects on evenings and weekends for 20 years. At this point I've written more code in more languages than most software engineers will in their entire careers. When I changed jobs back in March, I had a 2:1 ratio of jobs applied for to offers. It was amazing; I was able to choose between 8 offers. I'm still early in my career.
There's no substitute for putting in the work.
I am also fairly convinced that AI will create more work opportunities for humans; it's just that the kind of work will change. (Some people predicted that automation would replace human labour, while in reality work shifted from manual labourers to "knowledge workers".)
Also, when AI gets even more powerful, more precision and domain knowledge will be required to express the requirements that produce the desired result. It will also be very costly to discover an imprecise requirement or query only after using its results for over a decade.
If you don't believe that (I'm in this camp, and I work in ML), then anything creative. Which also means a lot of things won't be taken over by AI. BUT that doesn't mean AI can't change the field significantly. For example, a few years from now AI might write quick routines for you: you could be writing LaTeX and the AI would fit the picture in for you, in the place you want. Research is probably going to be the most future-proof, and we might even become symbiotic once we reach HLI (human-level intelligence).
On a slightly different note, I think there's a relevant question I like to ask people (and they don't like to answer): what do you do with a society where 10% of your workforce is unemployable because of automation? The question is about the transitional period to post-scarcity. Transitions are rough, and I think it's quite possible this comes about in our lifetime.
The people closest to a mop handle will stay employed the longest. If not physical, then digital.
At the end of the day it's not about the code shapes or the languages we make; it's about correct machine state coupled to a context.
Eventually we’ll have a deduplicated data model of sufficient detail and the algorithms that can take a context and render it visually or audibly.
Networked bootstrapping, updating, and healing of the model will be the norm. There will be a hardware I/O kernel and the AI to sample a model with.
This is going to happen because, similar to no one having an obligation to past religious traditions, there is no obligation to your past computing traditions.
It's going to happen because having programmers recreate code shapes to fiddle with machine states is wasteful engineering practice.
Society learns and moves on. It does not sit still and babysit the sensibilities of its past.
Automating engineering is good engineering because it removes complexity and redundancy.
Reality does not care about our old philosophy. We have to be prepared to adapt to reality.
Many will think me dismissive, but even with the cool stuff it can do, GPT is more like a probabilistic interface into Stack Overflow than a replacement for a developer.
And if a developer who worked for me ever said they got GPT to write part of our code, they would be fired (with discussion and warnings first and whatnot). I suspect a lower-end market will emerge where you can get someone to cobble together some chatbot-written monstrosity for you at a discount, but we're a long way from developers being out of the loop.
ChatGPT will become a core part of my workflow, a kind of pair programmer on steroids. I'll bounce ideas off it, check for refactored implementations, and use it as a sounding board.
Last night it taught me about k-NN and cosine similarity so I could generate videogame recommendations for my site. It was the first time I had heard of either concept; now I can go learn more about them and apply them to build something. It did this in about 10 minutes.
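For anyone curious, the technique is small enough to sketch in a few lines. This is a minimal illustration in plain numpy with made-up games and feature vectors (a real site would presumably derive features from tags, genres, or play data):

    import numpy as np

    # Hypothetical catalog: each row is a game's feature vector,
    # e.g. weights for puzzle / farming / platforming / sci-fi tags.
    games = ["Portal 2", "Factorio", "Stardew Valley", "Celeste"]
    features = np.array([
        [0.9, 0.1, 0.0, 0.8],
        [0.7, 0.2, 0.0, 0.9],
        [0.1, 0.9, 0.1, 0.0],
        [0.3, 0.0, 0.9, 0.1],
    ])

    def recommend(query_idx, k=2):
        """Return the k games most similar to games[query_idx]."""
        q = features[query_idx]
        # Cosine similarity: dot product divided by the product
        # of the vectors' L2 norms.
        sims = features @ q / (np.linalg.norm(features, axis=1) * np.linalg.norm(q))
        sims[query_idx] = -np.inf             # never recommend the query itself
        nearest = np.argsort(sims)[::-1][:k]  # indices of the k most similar games
        return [games[i] for i in nearest]

    print(recommend(0))  # -> ['Factorio', 'Celeste']

That's k-NN in a nutshell: rank everything by similarity to the query and take the top k.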
Adapt and elevate yourself and your work.
Or don't and go the way of the Zend PHP Certified Engineer.
Option 2: Get close to the client. They're not able to clearly express a spec to us; they're not going to start doing so with a bot on their own.
Actually, I'm not that worried. My generation had to install OSes before doing anything; the next didn't install their software, but at least they had access to Chrome's F12. Nowadays kids grow up in the walled garden of their phones.
Somebody is going to have to keep their hands dirty to keep the systems going, and it's going to be us; which brings me to:
Option 3: Get into security. Who is insane enough to trust ChatGPT to make changes to iptables/ADs/etc?
Otherwise you just have to stop worrying so much about it.
The current state seems like a very useful tool in its infancy, but it will need to be able to grok an entire project before it can jeopardize the jobs of even the most junior developers.
I'm looking forward to the day it can do this, and then I'll be able to tell it: "write unit tests for this function" :-)
For example - how about becoming an AI programmer?
And there's also plenty of free "big data" stuff online to play around with. (Ultimately AI is going to have to train on large pools of data -- and someone's going to have to set up those training pools...)
Then I thought, isn't that happening already? Hasn't somebody tried applying an AI to designing an AI?
What evidence are you basing your conviction on?