Most of this chatter I have seen on Twitter, but there are artist communities battling AI communities on Reddit as well.
I personally think that everyone will just need to adapt as it has always been, but I am curious what others on HN think.
Are professionals in for a rude awakening? Are artists, software engineers, and writers really going to be replaced by AI?
Will software engineering involve product managers talking to ChatGPT instead of to engineers, and if we're still in the mix, will our salaries be substantially reduced?
Obviously the technology will have SOME impact, even if there is no "apocalypse", so how should professionals be viewing this?
What are the best ways to prepare for the inevitable shift? And what should the message to the scared / confused public be?
Maybe I'm not seeing the leopard that will eventually eat my face, but in the worst case scenario, I don't think it's happening quite that fast, and if it is, it's probably more boring than we are imagining, unseen consequences and all. It's just that hard to predict the future.
So take video game art. It looks like you could train an AI to generate all of that, and if it can't yet, it will be able to soon. That will probably empower current digital artists and give them more capacity. It will also allow smaller shops to produce higher-quality art, perhaps with a creative director running prompts through the AI model instead of hiring digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have artists anyway.
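For concreteness, here is roughly what that "creative director running prompts" workflow can look like today with Hugging Face's diffusers library (a minimal sketch; the model id and prompt are just examples, and the weights have to be available locally):

    from diffusers import StableDiffusionPipeline

    # Load a public Stable Diffusion checkpoint (example model id).
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    # The "creative director" part: iterate on prompts, keep the good outputs.
    image = pipe("concept art, ruined castle at dusk, painterly, game asset").images[0]
    image.save("castle_concept.png")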
At some point we will probably get prompts to movies as well.
Prompt-to-SQL will probably happen as well, as will prompt-to-code (it already has). This will first be code that a dev refines. It can be dangerous because of subtle implications, but that will eventually work itself out. So expect the same pattern for dev work as with digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have devs anyway.
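To make those "subtle implications" concrete, here is an invented example of the refine step (the prompt, the model output, and the fix are all hypothetical):

    # What the dev asked for:
    prompt = "SQL to list customers who placed no orders in 2022."

    # A plausible model answer:
    generated_sql = """
        SELECT * FROM customers
        WHERE id NOT IN (SELECT customer_id FROM orders
                         WHERE order_date >= '2022-01-01'
                           AND order_date <  '2023-01-01');
    """

    # The subtlety the reviewing dev must catch: if orders.customer_id is ever
    # NULL, NOT IN compares against NULL and the query returns no rows at all.
    refined_sql = """
        SELECT c.* FROM customers c
        WHERE NOT EXISTS (SELECT 1 FROM orders o
                          WHERE o.customer_id = c.id
                            AND o.order_date >= '2022-01-01'
                            AND o.order_date <  '2023-01-01');
    """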
There will be prompt-based no-code solutions for business analysts as well. Will this replace the business analyst? Probably not. Will it allow you to do more with less? Probably. Will it scale? Maybe not; you still might need a bunch of analysts to wrangle all of the systems.
In any case scale and growth will probably mean you need more people unless you can design the overall system well.
So in some sense we all become managers with little AI bots doing the IC work.
But it has always taken a while for companies to adapt to and make use of new technologies.
Marc Andreessen talks about the idea that new technologies follow a cycle: first they're ignored, then people fight them, then they settle for calling people names for using them.
You can look back on the industrial revolution, or more recently the internet, for an idea of what that pattern looks like. Some companies might adapt fairly fast, but I suspect those will be the rarity.
Instead, what you'll have are small groups of individuals, highly leveraged by AI, coming in and making new products that wholesale replace non-AI companies. Some old companies will acquire new ones in time and survive; many won't.
One early example might be Lensa[0]. They use Stable Diffusion on the backend behind a paid iPhone app. Pretty simple stack there, not even training any models themselves. And yet they're now doing $1M/day in revenue.
We're going to see a lot more of these.
Big companies will "try" too, but they'll mostly just have meetings and power grabs about trying. The 2020s are the decade of the startup.
[0] https://apps.apple.com/us/app/lensa-ai-photo-video-editor/id...
So I think as a programmer you don't have to worry, but the other implications will probably be huge.
AI would be a great addition to tools like VCS, RDBMS, CI/CD pipelines, and testing, and should help developers write better, more robust systems.
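One small, hypothetical shape that could take on the testing side (complete() here is a stand-in for whatever completion API you use; nothing below is a real library call):

    def complete(prompt: str) -> str:
        raise NotImplementedError("wire up your model of choice here")

    def suggest_edge_cases(signature: str) -> str:
        # Ask the model for tricky inputs; a human still vets every suggestion.
        return complete(
            f"List tricky inputs for a function with signature `{signature}`, "
            "one Python literal per line."
        )

    # e.g. suggest_edge_cases("def parse_price(s: str) -> Decimal") might
    # return lines like '""', '"-0.00"', '"1e999"', candidates for new tests.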
What was truly terrifying about the industrial revolution was the way it upgraded the horrors of warfare to an entirely new level and precipitated a world war brought about by the shifts in the relative power of dominant empires at the time.
I don't think AI will put us out of a job. I do think it could trigger terrifying new kinds of warfare and oppression.
I reckon there will be one or more Ottomans - dominant world powers who do not adjust to the new technological realities and get crushed as a result.
Example: AI can now generate great individual pieces of concept art, but in my opinion it will still take some time until it can do so coherently for a full project, where everything needs to fit together. In the same way, a developer needs to write code that fits into existing systems. Both can of course already profit from AI today, but neither is so easily replaced.
The way bigger threat lies in all the social aspects of the internet. It's already hard to weed out all the crap when I want to find something specific, e.g. on YouTube. I imagine it will be even harder when I need to filter through the low-quality generated content uploaded just for the numbers. I also see non-curated online discussion platforms and comment sections dying: how am I supposed to have a proper discussion when, every time I take a stance, there are instantly 10 bots screaming back at me?
Software engineers' or artists' jobs aren't going to "vanish" instantaneously because of AI; instead, AI will make our lives easier.
Menial, entry-level tasks like writing basic, repetitive code or doing basic design work will vanish or slowly be phased out. Higher-level work that requires a lot of creativity and critical thinking won't be replaced by AI, at least not for a VERY long time.
As it stands, ChatGPT behaves more like a programmer who is just learning how to code. Just as Photoshop or Figma is a tool for designers, software engineers will soon start using ChatGPT to automate certain mundane tasks.
We already do that on sites like StackOverflow, where we find regexes and the like.
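A toy example of what that looks like in practice (the regex is an invented model answer, not real ChatGPT output):

    import re

    # Suppose the model answers "match an ISO date" with this:
    pattern = re.compile(r"\d{4}-\d{2}-\d{2}")

    print(bool(pattern.fullmatch("2022-12-08")))  # True, as hoped
    print(bool(pattern.fullmatch("2022-13-99")))  # also True: month 13, day 99

    # Like any StackOverflow snippet, it still needs a human to tighten it.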
The future is not about everyone becoming unemployed. The future is one where everyone has their personal army of secretaries.
I bring up the analogy of the Renaissance master painter, who often had a studio of apprentices. When preparing a huge painting, instead of doing it all themselves, they let their apprentices paint the easy bits, then did the hard parts (if needed) and signed the work.
The downside, of course, is that the need for apprentices shrinks; but then again, everyone can have their own art studio (where previously only a few superstars could afford one).
That's the real nightmare, not which part of the implementation goes where.
It was no problem finding an unprotected home directory with a solution in it, but we found the program didn't work. In the end we had to not only modify the program enough to not get caught, but also fix the bugs in it.
Little did I know what good preparation this would be for my career in software development!
Devastated by the impact of a $100 million project failure, I took an underpaid job for a small but proud web development shop based at a Superfund site, where I completed roughly 20 projects that other programmers had started in about 9 months. It was the most acute example of something I'd experienced a lot in my career, both before and after: somebody, anybody from a complete fresher to a certified genius to somebody getting a master's in A.I. because they really needed the intelligence, built something they couldn't finish and left behind a product that looked promising but needed serious rework to get it in front of customers. (... Then we ran out of projects, I cracked, and two days later got a job at the other web development shop that was landing all the new contracts we were failing to get.)
I see GPT-3 as that fresher programmer who can make things that look promising to management but in the end turn out to need a huge amount of rework to put in front of customers. For a time I was greatly resentful that somebody else would seem to do the "20% of the work that gets 80%" of the results while I'd do the "80% of the work that gets 20% of the results" and have people complain I took too long to do things, even during my annus mirabilis at Spider Graphics or the many other times I'd saved a project that had been circling the drain for years.
GPT-3 has a hypnotic ability to get away with making mistakes which I think is a product of it being trained to produce the token with the highest probability. Like Andy Warhol, it is actively anti-creative.
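A toy illustration of that "highest-probability token" point (the vocabulary and scores are made up, and real deployments usually sample with a temperature, but the bias toward the probable is the same):

    import math, random

    vocab  = ["the", "a", "quantum", "banana"]
    logits = [3.1, 2.8, 0.4, 0.1]  # made-up model scores for the next token

    weights = [math.exp(x) for x in logits]
    probs = [w / sum(weights) for w in weights]

    greedy = vocab[probs.index(max(probs))]            # always "the": safe, bland
    sampled = random.choices(vocab, weights=probs)[0]  # occasionally surprising
    print(greedy, sampled)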
Fixing the hard-to-find mistakes it makes will be a maddening job, and people will always be looking for ways to push the bubble around under the rug, not realizing that the machine they are trying to build is impossible for fundamental logical reasons. I think of the dialogues of Achilles and the Tortoise from
https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
where they are trying to build impossible machines and repeatedly failing because they have no idea that what they're trying to do is impossible. I've had people say GEB is a critique of the old symbolic AI but neural networks don't repeal the fundamental results of mathematical logic and computer science.
Sure, you can escape Gödel's theorem by building a system that doesn't get the right answer but then you have a system that doesn't get the right answer.
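The diagonal argument behind those impossibility results fits in a few lines. A sketch, where halts() is the machine someone claims to have built:

    def halts(program, argument) -> bool:
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError  # no correct implementation can exist

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about self-application.
        if halts(program, program):
            while True:
                pass  # loop forever
        # otherwise, halt immediately

    # paradox(paradox) contradicts any answer halts() could give, so no total
    # halts() exists; training a neural network doesn't repeal that result.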
GPTs are an explosive multiplier for productivity. They will become the new baseline, and people will be asked to do more with them. Those who can't keep up will lose jobs, etc., but not the majority. We are in for a huge jump in productivity.
The best we can do is educate the public about the existence of these tools. It's crazy that people have been trained to dismiss them because of bad press over the past 10 years. We can't really stop technology, so we'd better join the ride.
The truth is that AI models, right now, are amplifying sensational and provocative content across our shared information sphere. AI chooses what information we get when we search the internet or look at our social media feeds. Models trained on data collected from social media can be used to manipulate public opinion. Any data derived from contentious social interactions across platforms can be employed in psychological operations by hostile actors to generate strife in a select population. These are very real threats, and people are suffering from them today. All the rest of the AI fear-mongering is such a distraction for the general populace that it almost functions as a straw-man threat or controlled opposition crammed into a headline: "Should we be worried about the military's new Skynet project? Experts say, 'No'". But when someone tries to explain to others the threat of something as inane as Facebook, there is general disbelief that it could be a catalyst for something like genocide. We should be reminded of this every time this subject comes up.
But, to answer your question: no position that requires depth or broad scope of knowledge across fields will be at risk. You cannot replace a software engineer with AI. Artists will experience a shift in the market, and they stand to lose if copyright cannot protect their work from being assimilated into training sets without permission. There will also be new positions opening in different industries to employ, train, and maintain these newer generative systems. Artists I know have already been working with generative models like Stable Diffusion because they find them intriguing. Overall it will not be catastrophic.
If you mean imminent as in inevitable, then the answer is likely yes, because the tools have been released publicly, and any regulation to the contrary will involve compromises whose outcome may still not change much, because the internet is forever. Someone will do it eventually.
Artists are scared because their creative style can be stolen just by sampling their art.
Service and professional workers are scared because most work is procedural. Middle-class positions are being automated away right now, and the lower-class ones will eventually follow; as everything progresses, you will have von Neumann machines operating at the behest of corporations, with no workers.
Given the rate of development and improvement, it is not outside the realm of possibility that someone finds a way to develop true AI, not the pseudo-intelligence we have right now.
That would involve machines capable of breaking the fundamental theory of computation, which defines classes of problems that machines as we know them can't solve but humans can.
If that problem is ever solved, society will likely completely collapse in short order.
The economy runs on people working through the benefit of a division of labor, trading time and productive effort for money, which they then use to survive and purchase food.
If machines can completely replace human workers, any human productive effort will be possible for a computer at that point. Machines produce more, are a fixed cost over their lifetime, don't unionize, and don't tire. The cost trend is clear.
If people can't get food, have too much time on their hands, have no hope for the future, lack any need or opportunity to become educated, and are disenfranchised, disempowered, and stripped of agency and voice, then unrest occurs as resources become ever more strained and accumulated by the top.
You end up with either MAD between humans, where we go extinct; enslavement of the whole by a small group of people, where most people are no better than cattle; or eradication, where a quirk of AI decides to simplify systems by removing and minimizing non-deterministic factors (humans).
Those are the most common theories, unless thinking machines are outlawed and destroyed, with all global governments on the same page and related research punishable by death; and even then, that may not be enough.
If you want to learn more about why things work the way they do, Adam Smith's 1776 Wealth of Nations is a good starting point for understanding the division of labor, productive effort, and the basics of the economy.
Jared Diamond wrote a decent book called Collapse as well, though it's dry.
The odds of long-term survival are not good, and that's just a simplified version, not including climate change or anything like that. Obviously univariate analysis is of limited worth, but the thing about probabilities is that, given enough time, any outcome with nonzero probability will eventually happen, no matter how remote. If we cannot match and compete with the pace of change because of physical or biological constraints, we'll go extinct.