TL;DR: what would you do if you were an ambitious teenager interested in tech entering a world soon to be dominated by large language models and other generative machine learning algorithms?
Don’t worry if you’re not a math major. You can and should get a degree in mathematics alongside a software engineering degree.
AI is a major factor in applied computer science and engineering. However, there is more to compsci than just "AI". It will make you more productive, provided you're above the threshold of engineers who will indeed be automated away.
* doesn't "care" if its output is true or false
* is weak at humour
which (for the moment) suggests that there remains a niche for people who have integrity and can laugh at themselves.
(Should that niche disappear in the future, at least one won't have lived in a shameful manner before then:
Losing one glove
is certainly painful,
but nothing
compared to the pain,
of losing one,
throwing away the other,
and finding
the first one again. )
AI can create art or write programs. It can write text. And in the future, it'll be able to make videos and music too.
But it can't BE someone else, not legally at least. A random AI cannot replace a popular celebrity because its their name that matters the most, not the objective quality of their work. It can't outcompete Disney or Marvel or DC or Nintendo at their own game, because their works are protected by IP laws and sell based on their characters and brands as much as the objective quality of the works.
So don't just draw or write or program in a way that you can be replaced. Do so under your own name/brand, so people pick you over the AI copycats based on your personality and image alone. If you become the next Stephen King or George RR Martin or Markiplier, people will check out your work because it's your work, and no amount of tech can ever compete on that front.
> built some pretty fun and cool things like chess engines, video games, 3d renderers from scratch and competed in my national computing contest with a solid placement
You're already ahead of 99% of the folks out there. Learn AI, learn ML, get into that now and you'll remain ahead of the curve.
I don't think AI is going to be quite that ground-breaking for at least another 20 years. It's impressive now, yes, but until it's on every device running under every OS as part of a commercial package, it's not going to disrupt anything.
If anything it'll be like the transition to cloud computing. Jobs will change. The disrupting factor will be that a good portion of data processing jobs will no longer need bodies to fill them. Middle management in IT will be next. Programmers and sysadmins will be safe for some time to come.
For a concrete answer in tech, have a look at AI development itself. Keep abreast of what OpenAI and their competitors identify as open problems and orient your academic career towards working on them. The object-level questions will change while you're in school; the goal is an "intercept trajectory" where what you study just before you graduate is the state of the art.
Alternatively, highly regulated professions (medicine, law, civil engineering to name a few) are likely to continue to employ bright humans long after AI can do the job just because no-one will be allowed to use AI there.
I understand that you are being told that AI will make everything obsolete, but remember that you are getting this message through news outlets. These are businesses that make money by getting attention, and nothing gets attention more than making extreme, especially threatening, claims.
The maturation of machine learning will change things, but exactly how is hard to predict.
Rather than try to optimize for 'success' based on guessing the future, I'd suggest that you focus on continuing to learn and ask yourself what you like.
People are still playing chess and go, and there will still be software.
A job is when someone pays you to add value.
(It's true all the way down to bullshit jobs, where the value you're adding is to meet an uneconomic need.)
It doesn't matter which kinds of work these models will eliminate. All that matters is building value in a reality where they exist. Just as a casual observation, climate change continues to rage on, energy supply is still a bottleneck, cancer remains a leading cause of death in high-income countries, etc. etc. etc.
There will always be value to add. The baseline may change, but no one yearns for the days before the copying machine or Excel.
In my mind, the most sensible thing an ambitious teenager can do is get a proper academic education. (AI is not magic, it's just the product of research, and all the people doing that research did exactly that: they got a good education.)
Also, and I realize this doesn't gel with the whole HN ethos but whatever, don't waste your early working years starting a company. The only way to discover real problems in the world is to live in that world. Getting to understand a domain well, any domain, will serve you better than any 'Uber for X' ever will.
My background is not in hardcore CS, ML, or AI, but I majored in engineering and have worked as a software engineer for about 6 years.
Both breadth and depth are important with knowledge/skills, and having diversified experiences early on (and throughout life) is valuable for "homing in on" what you want to do.
This is just to say that seeking out various computing "applications" wherever possible may help expose you to the different things out there. It could be neuroscience, medical imaging, nuclear fusion energy, or whatever interesting thing presents itself. Since you mentioned starting a business, understanding a variety of markets could also be valuable.
best of luck and keep hacking
I come from an automation background and work as a tech lead, and I have absolutely no concerns about AI. Most people who worry about it don't fully understand what it is: ultimately it's just a tool, albeit one with internals that are more difficult to understand.
Nevertheless, strong communication skills are leverage on almost anything you do. That includes programming. They also can't be surpassed by AI in a non-apocalyptic world.
You can start by joining Toastmasters, and/or Rotary.
Communication skills can be further improved by understanding human psychology. If you're able to quickly detect personality disorders and other psychological minefields you'll encounter in the workplace, it'll bode well for you.
Again, no machine can understand a human better than another human, in anything short of an apocalyptic future.
What's left? Jobs like kindergarten worker, nurse, and police officer aren't likely to be replaced so soon... (would doing that bring you joy, though?)
You can also try catching some of the AI boom and starting your own business; there's probably a lot of opportunity to be had in the coming 5-10 years.
AI can't refactor code, or debug (or at least it can't debug well, as far as I know). Folks here can correct me if I'm wrong; I stopped looking into AI once I realized it wasn't as far along as most non-technical people seemed to think.
Things like Stable Diffusion and OpenAI's models have made great strides, but they will only serve to enhance humans' skills and abilities. There are limitations to what these tools can do, so I wouldn't worry too much about having to compete with them once you're in the workforce.
An interesting topic for a science fiction novel, but that doesn't describe your experience today.
I studied ML at a PhD level. A lot of "neat tricks", but a whole lot of issues preventing it from taking over basic tasks like driving (let alone programming).
Just keep doing what you're doing and adapt to the AI revolution when it actually happens.
It's been "just 10 years away" since the 1960's. :)
There will also be a very long tail of "traditional" software engineering. But if you are still 4-5 years out from your "career", you probably won't want to count on a traditional software engineering job.
The funny thing with problems is that they never end. Even if AI is effective against the current slate of problems, new ones tend to show up quickly.
How fast AI transforms the economy is something I'm in no position to begin to predict, but presuming it moves with a speed that is unexpected by at least a good proportion of people, one of the things that will surely be disrupted the most is the idea of career planning for teenagers and high-schoolers.
Because surely, in line with the idea of the "singularity", one of the corollaries of AI disruption will be that disruption events become more frequent and less predictable. At some point, either there's no point in planning out a career, or the only plans that seem to make sense are vague, snake oil, or a retreat to really basic, fundamental things that entail a major shift in lifestyle and even philosophy, like farming.
If anything like that pans out, with sufficient speed, I'd predict some hefty social disruption, like groups of people going hard on Luddism. But I'd also predict some general polarisation not unlike what appears to have transpired in the West politically, which I think can be attributed to a growing sense of uncertainty and misunderstanding of an increasingly complex and incomprehensible world.
I know I'm not answering your question. That's partly because, though I'm older than you, I've never settled well on a career and am in a not-too-dissimilar, perhaps worse, situation than you. But also, as someone who didn't see this year's wave of AI coming, my response is that I don't have an answer, and I think the fact that I don't, and that you have to ask this question at all, is the actual problem.
...
All that being said, AI pessimism can go too far, and the tone of my post above is certainly guilty of this. Recent AI developments still have plenty of problems, and there's plenty of scope in tech for jobs that aren't going to be touched by AI.
Nonetheless, in the spirit of this more optimistic position (for tech at least), I would predict that the blind spot for many on here (and in tech in general) will be the experience of juniors as AI becomes more and more involved.
On one hand, I can see leveraging AI tools to be more productive and learn faster to become more common, especially amongst more adaptable younger people.
On the other hand, I can see AIs making the work of juniors harder, because the AI most easily supplants a junior's non-expert, error-prone work, and so might raise the floor of the bare minimum expertise in the industry without providing accessible means for newcomers to acquire that expertise. It may quickly become the case, for instance, that being able to write code in a language and having some vague and basic awareness of tools and concepts is no longer sufficient for a junior; instead, you need to know the things that an AI doesn't but which are hard to learn without experience, like the pitfalls of combining two tools, or what certain errors are likely to be caused by.
This is actually a process that has been a big part of recent news: the Boomer generation, being oversized and long-lived, sucked up a lot of the jobs. By the time their Millennial children arrived, a lot of career paths had no emerging prospects because the parents were still there, still in the same roles, and the world had become fantastically more competitive through globalization. But now they've finally begun to age out altogether, and that transition plays into the dynamic of high unrest, financial turmoil and anxiety around tech that we're experiencing.
The thing is, tech is something societies decide to invent pragmatically, based on what available science allows. We invented "automobile tech" because policy and available resources supported its widespread use in the richest parts of the world. In the parts destroyed by WWII, auto tech still existed but was complemented by reinvestment in rail tech.
So with ML-based AI, the tech is something we're currently looking for models of application for. The science is cool, and there are certain things it's great at, but contra the "learn AI" replies, that doesn't mean the "AI industry" is something you'll gain the most leverage from by approaching it at its most fundamental level, getting a degree in it, and then simply signing up for a research job. That is one of the most competitive routes you could take, as the "easy part" of the field is already behind us, and now everything is going to be about little nuances of improving the tech and integrating it better.
What we currently see in terms of AI users is relatively unsophisticated application: people who log on to one of the apps, send a few basic prompts, and then stop there, satisfied with the result. But I believe the way to think about this is rather to become sufficiently AI-literate to use it to storm the gates of some other field, as a combination threat: a layered synthesis of traditional know-how and new methods. This is why some artists are unconcerned, while others are panicked; one group sees a way to enhance what they're doing with another layer of tech, while the other sees a threat to their routine illustration and asset creation gigs.
But in "storming the gates" you have to expect to arrive at a surprisingly empty field, where nobody knows what you are doing or whether it's valuable. It might take several years to build up visibility, just trying things and publishing results, before you find a path to monetize on it.
If the basic idea you're building from is coherent (it doesn't contain contradictory elements, and you can use it as a philosophical framework to assess the value of your output), you will stay motivated and eventually succeed. Study enough philosophy to get what it means to be coherent and to build such a framework; it'll pay off.