HACKER Q&A
📣 w4ffl35

Is the AI Apocalypse Imminent?


I feel there is a lot of FUD surrounding AI, especially on Twitter, where people don't understand the technology. I've seen posts about software engineering jobs vanishing in 5 years thanks to ChatGPT. Artists are terrified that they will lose their livelihood thanks to Stable Diffusion, etc.

Most of this chatter I have seen on Twitter, but there are artist communities battling AI communities on Reddit as well.

I personally think that everyone will just need to adapt as it has always been, but I am curious what others on HN think.

Are professionals in for a rude awakening? Are artists and software engineers and writers really going to be replaced with AI?

Will software engineering involve product managers talking to ChatGPT instead of Engineers, and if we're still in the mix, will our salaries be substantially reduced?

Obviously the technology will have SOME impact, even if there is no "apocalypse", so how should professionals be viewing this?

What are the best ways to prepare for the inevitable shift? And what should the message to the scared / confused public be?


  👤 Rzor Accepted Answer ✓
I mean, I'm still waiting for the impact of Tabnine/Copilot that people talked about last year. I'm in the camp where a charitable view is needed: AI will actually give us more power. You still need to know what it is doing and be wary of any "overconfidence/hallucination". Let's say it gets really good and a senior developer can now do the job of 5-10 other devs, God knows how; that could also mean that a lot of small-to-medium companies would be able to immensely pump up their productivity (more with less and all), and perhaps see more growth and openings.

Maybe I'm not seeing the leopard that will eventually eat my face, but in the worst case scenario, I don't think it's happening quite that fast, and if it is, it's probably more boring than we are imagining, unseen consequences and all. It's just that hard to predict the future.


👤 theGnuMe
I see these things right now as power tools. But that view might change. They also let people be creative without necessarily needing an army of others.

So take video game art. It looks like you could train an AI to generate all of that, and if it can't yet, it will happen soon. That will probably empower current digital artists and give them more capacity. It will also allow smaller shops to produce higher-quality art, perhaps with a creative director running prompts through the AI model vs. hiring digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have artists anyway.

At some point we will probably get prompts to movies as well.

Prompts to SQL will probably happen, as will prompts to code (it has already happened). This will first be code that a dev will refine. It can be dangerous because of subtle implications, but that will eventually work itself out. So expect the same pattern for dev work as with digital artists: at some point the whole thing becomes quite complex to manage, so you may have devs anyway.
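The "subtle implications" point is worth making concrete. A minimal sketch, using SQLite and a hypothetical model-generated query (the schema and query are illustrative, not from any real product): `NOT IN` against a subquery that can return NULL silently matches nothing, which is exactly the kind of bug the refining dev has to catch.

```python
import sqlite3

# Toy schema: customers and orders, with one customer who never ordered.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Bob');
    INSERT INTO orders VALUES (1), (NULL);  -- a NULL snuck into the data
""")

# Plausible model output for "customers with no orders" -- looks right, but
# NOT IN against a set containing NULL evaluates to UNKNOWN for every row,
# so it returns no rows at all.
generated = "SELECT name FROM customers WHERE id NOT IN (SELECT customer_id FROM orders)"
print(conn.execute(generated).fetchall())  # [] -- Bob silently disappears

# The refined version a reviewing dev would write:
refined = """SELECT name FROM customers
             WHERE NOT EXISTS (SELECT 1 FROM orders o
                               WHERE o.customer_id = customers.id)"""
print(conn.execute(refined).fetchall())  # [('Bob',)]
```

The generated query is syntactically fine and passes a glance test; only someone who knows SQL's three-valued logic spots the problem.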

There will be prompt-based no-code solutions for business analysts as well. Will this replace the business analyst? Probably not. Will it allow you to do more with less? Probably. Will it scale? Maybe not; you still might need a bunch of analysts to wrangle all of the systems.

In any case scale and growth will probably mean you need more people unless you can design the overall system well.

So in some sense we all become managers with little AI bots doing the IC work.


👤 f0e4c2f7
In the industrial revolution you suddenly had all this technology that could replace manual labor.

But it actually took a while for companies to adapt and make use of that.

Marc Andreessen talks about this idea that new technologies follow a cycle: first they're ignored, then people fight them, then people settle for calling you names for using them.

You can look back on the industrial revolution, or more recently the internet, for an idea of what that pattern looks like. Some companies might adapt fairly fast, but I suspect those will be the rarity.

Instead, what you'll have are small groups of individuals, highly leveraged by AI, coming in and making new products that wholesale replace non-AI companies. Some old companies will acquire new ones in time and survive; many won't.

One early example might be Lensa [0]. They use Stable Diffusion on their backend behind a paid iPhone app. Pretty simple stack there, not even training any models themselves. And yet, they're now doing $1M/day in revenue.

We're going to see a lot more of these.

Big companies will "try" too, but they'll mostly just have meetings and power grabs about trying. The 2020s are the decade of the startup.

[0] https://apps.apple.com/us/app/lensa-ai-photo-video-editor/id...


👤 randomNumber7
Talking with a chatbot so that it produces code could still be considered programming, imo. So even if it works perfectly, you still need programmers; the job will just be different.

So I think as a programmer you don't have to worry, but the other implications will probably be huge.


👤 FlyingSnake
AI/GPT will most likely end up being a great tool in our toolkit, like previous innovations. Using it adversarially to reduce developer headcount will end in disaster. Just look at 4GLs, RUP, NoCode, and other failed paradigms from recent history.

AI would be a great addition to tools like VCS, RDBMS, CI/CD, and testing, and should help developers write better, more robust systems.


👤 pydry
The industrial revolution didn't create 90% unemployment. It changed the nature of 90% of jobs but it didn't cause unemployment in and of itself. This doesn't terrify me.

What was truly terrifying about the industrial revolution was the way it upgraded the horrors of warfare to an entirely new level and precipitated a world war brought about by the shifts in the relative power of dominant empires at the time.

I don't think AI will put us out of a job. I do think it could trigger terrifying new kinds of warfare and oppression.

I reckon there will be one or more Ottomans - dominant world powers who do not adjust to the new technological realities and get crushed as a result.


👤 rapjr9
One problem with the current crop of AIs in the news is that they are only backward-looking. They only "know" what was in the training data up until they were trained, so to incorporate new things you have to retrain them, which could be an enormous expense to do continually.

Also, if people start to rely on such AIs, then the training data disappears and is replaced by AI output, so the whole thing is likely to get locked in a loop and never change, and biases will get locked in too. If you want to generate a dance video, say, and the AI was trained before Michael Jackson was around, you'll probably never be able to generate a Michael Jackson-style video.

However, there is also human input and guidance that is the source from which the AI generates output, so the quality and usefulness of the output will depend on how precisely the input can be specified. I agree with others (on other forums) who suggest that this very quickly turns into something like a programming language, where specificity becomes important for generating good content, which requires... human skill. Still, that seems useful: if not a replacement for people, an enhancer of people.

👤 thunfischtoast
I don't see an imminent main threat to professionals, to be honest. Even today, the worth of a professional's work (be it artist or developer) is not the ability to look up how to do something specific or to churn out good-looking but random results, but to create results that are coherent over the long term and fit the customer's needs.

Example: AI can now generate great individual pieces of concept art, but in my opinion it will still take some time until it can do so coherently for a full project where everything needs to fit together. In the same manner, a developer needs to write code that fits into existing systems. Both can of course already profit from AI today, but they are not to be replaced as easily.

The way bigger threat lies in all the social aspects of the internet. It's already hard to weed out all the crap when I want to find something specific, e.g. on YouTube. I imagine it will be even harder when I need to filter through the low-quality generated content that will be uploaded just for the numbers. I also see non-curated online discussion platforms and comment sections dying: how am I supposed to properly discuss anything when, every time I take a stance, there will instantly be 10 bots screaming back at me?


👤 Madmallard
To me it seems like ChatGPT will amplify the power of existing programmers. Think how much more productive you could be if you could basically speedrun the necessarily tedious, simple, or boilerplate sections of code bases. Likewise with ultra-tedious things like connecting to AWS or other third-party middleware.
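As a minimal sketch of what "speedrunning boilerplate" might look like: the retry-with-backoff decorator below is the kind of rote scaffolding one might ask a model to draft and then review (the names, defaults, and flaky function here are illustrative, not from any particular library).

```python
import time
from functools import wraps

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff -- classic boilerplate."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = []

@retry(attempts=3)
def flaky():
    # Simulates a call that fails twice, then succeeds.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky())  # "ok", after two silent retries
```

The value isn't that this is hard to write; it's that a dev who reviews a generated draft like this spends seconds instead of minutes, across hundreds of such fragments.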

👤 gaurangt
I see systems like ChatGPT and Stable Diffusion as "tools" that would aid us in our jobs.

Software engineers' or artists' jobs aren't going to "vanish" instantaneously because of AI; instead, it will make our lives easier.

Menial, entry-level tasks like writing basic, repetitive code or basic design work will vanish or slowly phase out. Higher-level functions which require a lot of creativity and critical thinking won't be replaced by AI, at least for a VERY long time.

As it stands, ChatGPT behaves more like a programmer who is just learning how to code. Just as Photoshop or Figma is a tool for designers, software engineers will soon start using ChatGPT to automate certain mundane tasks.

We already do something similar on sites like StackOverflow, where we go to find regexes and the like.
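For instance, a deliberately simplified, illustrative pattern of the kind people hunt down on Stack Overflow rather than derive themselves:

```python
import re

# A simple validator for ISO-8601 calendar dates like 2022-12-09
# (illustrative only: it checks digit ranges, not month lengths).
iso_date = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

print(bool(iso_date.match("2022-12-09")))  # True
print(bool(iso_date.match("2022-13-01")))  # False -- no 13th month
```

The shift is that instead of searching for this snippet, you describe it to a model; either way, you still need enough regex literacy to notice what the pattern does and doesn't check.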


👤 fsloth
Generally, automation so far has not replaced labour but increased productivity; by removing rote work it changes the nature of work instead of removing the need for human work.

The future is not about everyone becoming unemployed. The future is one where everyone has their personal army of secretaries.

I bring up the analogue of the Renaissance master painter, who often had a studio of apprentices. When preparing a huge painting, instead of doing it all themselves, they let their apprentices paint the easy bits, then did the hard parts (if needed) and signed the work.

The downside is of course that the need for apprentices shrinks; but then again, everyone can have their own art studio (whereas previously only a few superstars could afford one).


👤 wnkrshm
Imagine you're in an SME and your CEO, who doesn't know how difficult things are to implement, asks ChatGPT about certain topics and permanently, secretly believes parts of what it says.

That's the real nightmare, not which part of the implementation goes where.


👤 chriskanan
I think we are going to move toward a world where many jobs have a co-pilot, and some abilities will be democratized in that less personal investment will be needed to become highly proficient from an output perspective. In the slightly longer term, I think we will move toward interacting with machines the way people interacted with "the computer" in Star Trek: TNG, which seems like just a very advanced co-pilot.

👤 PaulHoule
I was a bit of a hacker in high school and when I was in college I got enlisted by a friend to "steal" somebody else's CS 101 homework.

It was no problem finding an unprotected home directory with a solution in it, but we found the program didn't work. In the end we had to not only modify the program enough to not get caught, but also fix the bugs in it.

Little did I know what good preparation this would be for my career in software development!

Devastated by the impact of a $100 million project failure, I took an underpaid job at a small but proud web development shop based at a Superfund site, where I completed roughly 20 projects that other programmers had started in about 9 months. It was the most acute example of something I'd experienced a lot in my career, both before and after: somebody, anybody, from a complete fresher to a certified genius to somebody getting a master's in A.I. because they really needed the intelligence, built something they couldn't finish and left behind a product that looked promising but needed serious rework to get it in front of customers. (... Then we ran out of projects, I cracked, and two days later got a job at the other web development shop that was landing all the new contracts we were failing to get.)

I see GPT-3 as that fresher programmer who can make things that look promising to management but in the end turn out to need a huge amount of rework to put in front of customers. For a time I was greatly resentful that somebody else would seem to do the "20% of the work that gets 80% of the results" while it seemed I'd do the "80% of the work that gets 20% of the results" and have people complain I took too long to do things, even during my annus mirabilis at Spider Graphics or the many other times I'd saved a project that had been circling the drain for years.

GPT-3 has a hypnotic ability to get away with making mistakes which I think is a product of it being trained to produce the token with the highest probability. Like Andy Warhol, it is actively anti-creative.
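A toy sketch of that intuition (not a faithful model of GPT-3, whose sampling can also be temperature-controlled; the tokens and probabilities below are invented): pure greedy decoding always emits the single most probable next token, so the output collapses to the most common continuation and rarer, more surprising paths never appear.

```python
# Invented bigram "model": next-token probabilities after each token.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "axolotl": 0.2},
    "cat": {"sat": 0.6, "flew": 0.4},
    "sat": {"<end>": 1.0},
}

def greedy_decode(token, max_steps=10):
    """Always pick the argmax next token -- the 'anti-creative' strategy."""
    out = [token]
    for _ in range(max_steps):
        nxt = max(bigram_probs[token], key=bigram_probs[token].get)
        if nxt == "<end>":
            break
        out.append(nxt)
        token = nxt
    return " ".join(out)

print(greedy_decode("the"))  # "the cat sat" -- never the axolotl that flew
```

The fluency comes from always saying the most expected thing, which is the same property that makes its mistakes so plausible-looking.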

Fixing the hard-to-find mistakes that it makes will be a maddening job, and people will always be looking for ways to push the bubble out from under the rug, not realizing that the machine they are trying to build is impossible for fundamental logical reasons. I think of the dialogues of Achilles and the Tortoise from

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

where they try to build impossible machines and repeatedly fail because they have no idea that what they're trying to do is impossible. I've had people say GEB is a critique of the old symbolic AI, but neural networks don't repeal the fundamental results of mathematical logic and computer science.

Sure, you can escape Gödel's theorem by building a system that doesn't get the right answer but then you have a system that doesn't get the right answer.


👤 ineedausername
You don't need to prepare, you just adapt and this happens subconsciously. We all become the machine and it becomes all of us, which is inevitable.

👤 licebmi__at__
It's not FSC (full self-coding), but I definitely see Copilot increasing my output by writing the boring parts for me. I'm thinking we may see an impact kind of like the industrial revolution: knowledge workers won't be obsolete, but the value of their work might be greatly devalued.

👤 tluyben2
I'm not saying it's the current iteration, but it's the first time in my life I think something is happening [0].

[0] https://twitter.com/luyben/status/1600663169353015297


👤 seydor
Each time it's the same FUD, though this time the fear factor is bigger than ever before; not because things are awful, but because people have been conditioned to yell every time there is something new.

GPTs are an explosive multiplier for productivity. They will become the new baseline and people will be asked to do more with them. Those that can't keep up will lose jobs etc, but not the majority. We are in for a huge jump in productivity.

The best we can do is educate the public about the existence of these tools. It's crazy that people have been trained to dismiss them because of bad press over the past 10 years. We can't really stop technology, so we'd better join the ride.


👤 sinuhe69
I'm still in the process of working out how to render a character/face consistently across a series of SD generations. It's one of the most basic tasks artists have to deal with every day.

👤 GoblinSlayer
If webdevs are replaced with AI, maybe it will bother to write interoperable pages.

👤 zeruch
Cue the Butlerian Jihad references...

👤 HybridCurve
I am infuriated every time someone mentions the threat AI poses: the media and others rant about some existential crisis mankind faces from a superintelligent machine, a killer robot, or some other ridiculous nonsense.

The truth is that AI models, right now, are amplifying sensational and provocative content across our shared information sphere. AI chooses what information we get when we search the internet or look at our social media feeds. Models trained on data collected from social media can be used to manipulate public opinion, and any data derived from contentious social interactions across platforms can be employed by hostile actors in psychological operations to generate strife in a select population. These are very real threats, and as a result people are suffering today. All the rest of the AI fear-mongering is such a distraction for the general populace that it almost functions as a straw-man threat or controlled opposition crammed into a headline: "Should we be worried about the military's new Skynet project? Experts say, 'No'". But when someone tries to explain to others the threat of something inane like Facebook, there is general disbelief that it could be a catalyst for something like genocide. We should be reminded of this every time the subject comes up.

But, to answer your question: no position which requires depth or broad scope of knowledge across fields will be at risk. You cannot replace a software engineer with AI. Artists will experience a shift in the market, and they stand to lose if copyright cannot protect their work from being assimilated into training sets without permission. There will also be new positions opening in different industries to deploy, train, and maintain these newer generative systems. Artists I know have already been working with generative models like Stable Diffusion because they find them intriguing. Overall it will not be catastrophic.


👤 than3
The answer to your question depends entirely on what your definition of imminent is.

If you mean imminent as in inevitable, then the answer is likely yes: the tools have been released publicly, and any regulation to the contrary will engender compromises whose outcome may still not change things sufficiently, because the internet is forever. Someone will do it eventually.

Artists are scared because their creative style can be stolen just by sampling their art.

Service and professional workers are scared because most work is procedural: the middle-class positions are being automated away right now, and the lower-class ones will follow eventually. As everything progresses, you will have von Neumann machines operating at the behest of corporations, with no workers.

Given the rate of development and improvement, it is not outside the realm of possibility that someone finds a way to develop true AI, not the pseudo-intelligence we have right now.

That would require machines capable of breaking the fundamental theory of computation, which defines classes of problems that machines as we know them can't solve but humans can.

If that problem is ever solved, society will likely completely collapse in short order.

The economy runs on people working through the benefit of a division of labor, trading time and productive effort for money which they can then use to survive and purchase food.

If machines can completely replace human workers, any human productive effort will be possible for a computer at that point; they produce more, are a fixed cost over their lifetime, don't unionize, and do not tire. The cost trend is clear.

If people can't get food, have too much time on their hands, have no hope for the future or opportunity to become educated, and are disenfranchised, disempowered, stripped of agency and voice, then unrest occurs as resources become ever more strained and accumulated at the top.

You end up with either MAD between humans, where we go extinct; enslavement of the whole by a small group of people, where most people are no better than cattle; or eradication, where a quirk of AI decides to simplify systems by removing and minimizing non-deterministic factors (humans).

Those are the most common theories if thinking machines are not outlawed and destroyed, with all global governments on the same page and related research punishable by death; and even then, that may not be enough.

If you want to learn more about why things work the way they do, Adam Smith's The Wealth of Nations (1776) is a good starting point for understanding the division of labor, productive effort, and the basics of the economy.

Jared Diamond wrote a decent book called Collapse as well; it's dry, though.

The odds of long-term survival are not good, and that's just a simplified version, not including climate change or anything like that. Obviously univariate analysis is of limited worth, but the thing about probabilities is that, given enough time, any possible outcome will eventually happen, no matter how remote. If we cannot match and compete with the pace of change because of physical or biological constraints, we'll go extinct.


👤 yrgulation
Depends on how much people enjoy microdosing. It also depends on the language and complexity of your work. If all you do is build binary trees and reinvent the wheel, then yes, your job is over.