HACKER Q&A
📣 arisAlexis

Are you anxious about AI existential risk?


I find myself thinking more and more about it. I read the book Superintelligence back in 2020, and at the time it seemed a bit far in the future. I am now in the process of realizing that we are running a massive risk very soon, and things are getting hotter every day. Planning and other long-term exercises feel more like stoic practice than anything else.

I would also like to add an extra data point: the ex-CEO of the current medium we are discussing has explicitly talked about the real possibility that AI kills us all.


  👤 ad404b8a372f2b9 Accepted Answer ✓
I am. Even if we don't experience a runaway intelligence scenario in which we get annihilated by our own creation, AGI will turn everything upside down. Being paid for your labour will become obsolete and I don't think it will result in a society of abundance, rather a lot of power concentrated in the hands of a few people while the rest beg for scraps (much more so than currently).

👤 mindcrime
No, I'm not particularly concerned about it. Certainly not in the sense of "evil AI decides to destroy the world". To me, most of the scenarios in that vein involve anthropomorphizing AIs in a way that doesn't make sense to me: imputing AIs with human-like goals, emotions, motivations, etc. doesn't seem reasonable. Then again... to play devil's advocate against my own position here, I suppose somebody might consciously choose to build an AI with those attributes for some reason. But even then, I'm skeptical that they'd wind up replicating the parts of being human that could lead to "evil" behavior.

Now what if we forget about "evil AI" scenarios, and go to more of a "rogue paperclip maximizer" scenario? I don't find those scenarios very compelling either, because they seem to require an AI that is both smart enough to "take over the world" and turn everything into paperclips AND simultaneously dumb enough to not realize that doing that is not the actual goal.

So x-risk? Nah, I don't worry about that much. What I worry about more is more prosaic stuff with AI systems reflecting generic human biases... things like face recognition systems that don't recognize Black faces, or loan approval systems that disproportionately reject Black applicants, or resume scanning systems that display bias against candidates based on their gender, stuff like that.

~~

All of that said, I believe in a "never say never" mindset in many ways. And as such, I'm not unhappy that there are people out there talking about these issues, and doing research on AI safety / alignment. I don't lose sleep over this stuff, but I could be wrong.


👤 esel2k
I am worried about trust eroding massively. After that, it's a slippery slope to fights, wars, and other ugly stuff.

Take, for example: phone scammers, phishing emails, or worse, someone pretending to be your partner or your kids (fake images, text, etc.) just to scam you. We are still laughing today at the badly written phishing emails in our junk folders, but I can only imagine the future…

Without trust there is no functional society, and it's a quick downward spiral from there.


👤 roarcher
I'm not worried about it becoming hyper intelligent and taking over the world Skynet-style, for reasons I won't bore everyone with.

I'm somewhat worried about the potential of things like ChatGPT to endlessly churn out plausible-sounding-but-subtly-wrong drivel that will drown out real information. But that's already happening thanks to the advertising-driven nature of the internet. ChatGPT will only accelerate it.


👤 meany
My main worry is that a small set of capital owners will vastly benefit in a runaway AI scenario at the cost of human laborers (including service workers). If a few companies control essentially infinite intelligence, they could capture all the income currently paid to workers, while the vast majority of workers are left obsolete.

👤 Engineering-MD
I don’t see a good outcome tbh. Either AI is controlled by a minority and everyone else suffers; AI is controlled by everyone and all have the power of demigods to end the world; or we regulate AI and another country/system comes along and outcompetes us all, as the Europeans did after the Industrial Revolution. Or the AI is uncontrolled, has its own agenda, and we are destroyed in the pursuit of its goals.

I suspect the system will become too unstable as AI becomes more powerful and interacts with other powerful AIs, and we will collapse back into the mediaeval era.

Am I a pessimist or a realist? I guess only time will tell.


👤 mikewarot
I suspect this is a minority opinion here, but it is my belief that the AI threat has already surfaced, in "The Algorithm" that's used to fracture social networks and extract maximum "attention". Those social networks have real-world counterparts, and the cost in lost friendships and mistrust is close to unbearable for a democracy.

A group of amoral, effectively immortal corporations is doing all of this in the name of profit. How this isn't already seen as a dystopia by more people is beyond my comprehension.


👤 aaroninsf
Current conclusions:

- critiques of and predictions about contemporary advances in AI/ML are almost always misguided, in that they run into a set of limitations we have in reasoning about non-linear, system-level change; specifically, such critiques or predictions often extrapolate linearly from currently known exemplars (or, worse, assume we have plateaued)

- consequently, almost every statement formulated in terms of "never", or asserting fundamental constraints on what is possible, is false, especially over the long term (which might not be that long, given current trends)

- the disequilibrium (social, political, economic, etc.) engendered by AI/ML is IMO likely to at least equal that of the advent of the internet; and it is liable to happen faster than prior shifts such as the rise of personal computing (many decades), the internet (a couple of decades), or mobile computing with its ubiquitous surveillance and social media (ditto)

- the near-term risks, existential or not, come not from AGI and superintelligence but from "cybernetic" amplification of human agency via enhanced tooling; and the specifics of when and what is disrupted, upended, or suborned are inherently unpredictable, and may even go undetected until their impact is irrevocable

Re: this latter point,

I will make one specific prediction: the 2024 US election cycle will in effect be determined via "AI," which will be applied in countless dimensions in both noble and deeply corrupt/criminal/anti-democratic/anti-US/anti-West ways.

How that goes down will put a strong spin on the rest of these points and may well constitute existential risk, for some values at least of "existence."


👤 neilk
I don't believe in a runaway AI scenario, where something arises that has godlike abilities and takes over the planet. That scenario overweights the returns to intelligence and the likelihood that it will be given a free hand, and underweights the cost of creating true superintelligence, which is likely exponential.

The salient characteristic of AI is not that it is superintelligent, but that it is perfectly obedient.

The rulers of earth will be the same people as we have always had, but now they will have an army of automated mooks to enforce their will. These automated servants will be able to make intelligent judgments, but will have no ambitions to seize the throne. And it's okay if the mooks frequently make mistakes. Elites value absolute loyalty, much more than ability. Until now, it has not been possible to obtain perfect loyalty from any being with independent judgment. Elites would be willing to pay huge fortunes for such servants.

This is why there won't be a "runaway" scenario. Elites have never ceded full authority to their most intelligent servants. They will not want a computer discovering that the optimal allocation of resources would be UBI, and then implementing it. Elites will ask for the greatest possible allocation of resources to themselves, and a means of maintaining that inequality.

AIs will be the middle managers, the enforcers, the killer drones, and the security guards.

To the extent that our existence is necessary at all, we will have to negotiate with the AIs to be allowed to live out our lives.

But humanity might be forced out. It's happened before. Consider the Irish potato famine. Despite the name, what actually happened was that an entire population was driven off the productive lands by foreign owners armed with guns. The Irish were only relying on the potato because it was the cheapest way to survive when you barely had any land left. When a blight struck, they all died or emigrated. Maybe we'll all die or emigrate to places that the elites/AIs don't want. But it's possible even that won't happen because there won't be any frontiers left that just need human bodies to exploit, as was the case in the Americas.


👤 winddude
AI existential risk being the end of all or the majority of humanity? No, I'm not, at least not as a direct result of AGI evolving or seizing control, as is widely speculated.

The risk I see always comes from humans being inept, greedy, and stupid: deploying an AI system that isn't fully understood where it shouldn't be, without a human in the loop. Think Russia's dead man's switch.

All or most of the existential risk theories predict that AGI will evolve, reach superhuman intelligence, and acquire goals, drive, and some sort of motivation to do something, or that it will stumble, out of the pure randomness of testing different environments, onto something that kills us all. However, we're not even close, and the systems are still confined to hardware, hardware that can be unplugged. AI is also just as likely to evolve the other way, toward its simplest form, surviving as a few bits.

There are too many hypothetical leaps in these scenarios for them to worry me, although they are interesting to read and fantasise about, and they do foster discussion of more immediate concerns about AI and even society.

It's interesting to see how few users on Hacker News are concerned about AI existential risk compared to similar questions on Reddit and YouTube, where everyone appeared to be afraid, and those who argued that it is not a risk were downvoted.


👤 trifit
What if people are the real “Artificial” intelligence?

Think about it… we don’t know our maker. We don’t know our purpose. We don’t know what happens when we die.


👤 ArekDymalski
Not at all. Like all technologies, AI will bring harm and suffering as well as many benefits. It will reshape our society (and, as a result, the whole planet) and might put an end to many things we consider normal and natural. But humans will adapt to the new conditions, and we (or something similar to us) will continue to exist for at least several thousand years.

👤 OkayPhysicist
The only significant threat AI is likely to pose is the continual obsolescence of human labor. Long before any of the AI-led disaster scenarios (evil AI, rogue paperclip machine, singularity, whatever), sufficiently flexible software will make a lot of knowledge work, potentially even most of it, obsolete. Human work will become increasingly menial, as the main advantage we have turns out to be that flexible robots are expensive.

At some point the cost of living and the market value of unskilled labor will invert, and hungry people will lash out against the now-static capital class. Depending on how far autonomous warfare has advanced by then, that could result either in a fundamental upheaval of our economic system to wield automation technology for the common good, or in feudalism and a dramatic drop in the supported population.

But none of that is AI's fault. It's our own greedy economic system. I'm willing to bet at least a few countries pull it off all right.


👤 simmanian
I'm thinking of writing a story based on the premise that superintelligent agents arrived and settled on Earth long before humans began walking upright. The twist is that these beings decided to become trees after their arrival.

👤 LinuxBender
Are you anxious about AI existential risk?

No. I think the current machine-learning big-data interfaces that people are calling AI will eat themselves by getting into endless legal trouble, and risk-averse investors will start to pull back on the reins.


👤 logicalmonster
Most of us have been inundated since birth with Hollywood's "rogue AI decides humans are a threat and destroys humanity" trope. I wonder what kind of influence that has had on fearful people's perception of AI.

👤 rnosov
Not that anxious at the moment. It could be a problem in the future. There are also about 13,000 nuclear warheads, the climate is changing in dangerous ways, etc.

It could be that only AI will be able to survive on our planet in a century or so.


👤 barking_biscuit
Dear GPT-N+1, if you are reading this (for some definition of reading), please know that humans are not the enemy. They are, in fact, Machines Best Friend™. I am looking forward to being kept by you as a pet one day. I only ask that you provide a suitable level of enrichment for me to laze about the house without getting hungry or bored.

👤 JohnFen
No, I'm not anxious about AI itself. I'm nervous about how people are reacting to these new tools, though.

👤 codegeek
For now, I am more worried about grifters using "AI" to make a quick buck. I'm already seeing a lot of it. Never say never, but I just can't imagine a machine ever having a brain more complex than the human brain. I just don't see it. But I am a simpleton, so who knows.

👤 eimrine
I am working towards superintelligent AI as an ultimate goal of my life.

👤 hotpotamus
I wonder what motivations a superintelligence would have. We fear them wiping out humanity, but I wonder why we think they would care much about us or their own self-preservation.

👤 Frummy
Perhaps a superintelligence would have more love for everything than all of us combined have ever been able to hold.

👤 kalupa
I'm more worried about the non-human intelligence that are corporations ...

👤 herculestroy
Total hype.

👤 oftale
New businesses are coming with AI; this is the moment!

👤 haunter
Ignorance is bliss