I would also like to add an extra data point. The ex-CEO of the current medium we are discussing has explicitly talked about the real possibility that AI kills us all.
Now what if we forget about "evil AI" scenarios, and go to more of a "rogue paperclip maximizer" scenario? I don't find those scenarios very compelling either, because they seem to require an AI that is both smart enough to "take over the world" and turn everything into paperclips AND simultaneously dumb enough to not realize that doing that is not the actual goal.
So x-risk? Nah, I don't worry about that much. What I worry about more is more prosaic stuff with AI systems reflecting generic human biases... things like face recognition systems that don't recognize Black faces, or loan approval systems that disproportionately reject Black applicants, or resume scanning systems that display bias against candidates based on their gender, stuff like that.
~~
All of that said, I believe in a "never say never" mindset in many ways. And as such, I'm not unhappy that there are people out there talking about these issues, and doing research on AI safety / alignment. I don't lose sleep over this stuff, but I could be wrong.
Take, for example, phone scammers, phishing emails, or worse, someone pretending to be your partner or your kids (fake images, text, etc.) just to scam you. We are still laughing today at badly written phishing emails in our junk folders, but I can only imagine the future…
Without trust there is no functional society, and it's a quick downward spiral from there.
I'm somewhat worried about the potential of things like ChatGPT to endlessly churn out plausible-sounding-but-subtly-wrong drivel that will drown out real information. But that's already happening thanks to the advertising-driven nature of the internet. ChatGPT will only accelerate it.
I suspect the system will become too unstable as AI becomes more powerful and interacts with other powerful AIs, and we will collapse back into the mediaeval era.
Am I a pessimist or a realist? I guess only time will tell.
A group of amoral, effectively immortal corporations is doing all of this in the name of profit. How this isn't already seen as a dystopia by more people is beyond my comprehension.
- critiques of and predictions about contemporary advances in AI/ML are almost always misguided, in that they run into the limitations we have when reasoning about non-linear, system-level change; specifically, such critiques or predictions often extrapolate linearly from currently known exemplars (or worse, assume we have plateaued)
- consequently, almost every statement formulated in terms of "never" or asserting fundamental constraints on what is possible is false, especially over the long term, which might not be that long given current trends
- the disequilibrium (social, political, economic, etc.) engendered by AI/ML is IMO likely to at least equal that of the advent of the internet; and it is liable to happen faster than prior shifts such as the rise of personal computing (many decades), the internet (a couple decades), and mobile computing with its attendant features such as ubiquitous surveillance and social media (ditto)
- the near-term risks, existential or not, are absolutely not from AGI and superintelligence, but from "cybernetic" amplification of human agency via enhanced tooling; and the specifics of when and what is disrupted, upended, or suborned are inherently unpredictable and may even go undetected until their impact is irrevocable
Re: this latter point,
I will make one specific prediction: the 2024 US election cycle will in effect be determined via "AI," which will be applied in countless dimensions in both noble and deeply corrupt/criminal/anti-democratic/anti-US/anti-West ways.
How that goes down will put a strong spin on the rest of these points and may well constitute existential risk, for some values at least of "existence."
The salient characteristic of AI is not that it is superintelligent, but that it is perfectly obedient.
The rulers of earth will be the same people as we have always had, but now they will have an army of automated mooks to enforce their will. These automated servants will be able to make intelligent judgments, but will have no ambitions to seize the throne. And it's okay if the mooks frequently make mistakes. Elites value absolute loyalty, much more than ability. Until now, it has not been possible to obtain perfect loyalty from any being with independent judgment. Elites would be willing to pay huge fortunes for such servants.
This is why there won't be a "runaway" scenario. Elites have never ceded full authority to their most intelligent servants. They will not want a computer discovering that the optimal allocation of resources would be UBI, and then implementing it. Elites will ask for the greatest possible allocation of resources to themselves, and a means of maintaining that inequality.
AIs will be the middle managers, the enforcers, the killer drones, and the security guards.
To the extent that our existence is necessary at all, we will have to negotiate with the AIs to be allowed to live out our lives.
But humanity might be forced out. It's happened before. Consider the Irish potato famine. Despite the name, what actually happened was that an entire population was driven off the productive lands by foreign owners armed with guns. The Irish were only relying on the potato because it was the cheapest way to survive when you barely had any land left. When a blight struck, they all died or emigrated. Maybe we'll all die or emigrate to places that the elites/AIs don't want. But it's possible even that won't happen because there won't be any frontiers left that just need human bodies to exploit, as was the case in the Americas.
The risk I see is always from humans being inept, greedy, and stupid, resulting in an AI system that isn't fully understood being deployed where it shouldn't be, without a human in the loop. Think Russia's dead man's switch.
All or most of the existential-risk theories predict that AGI will evolve and reach superhuman intelligence, develop goals and drives and some sort of motivation to do something, or simply stumble, through the pure randomness of testing different environments, onto something that kills us all. However, we're not even close, and the systems are still confined to hardware, hardware that can be unplugged. AI is also just as likely to evolve the other way, toward its simplest form, to survive as a few bits.
There are too many hypothetical leaps and scenarios for this to worry me. Although they are interesting to read and fantasise about, and do foster some discussion about more immediate concerns about AI and even society.
It's interesting to see how few users on Hacker News are concerned about AI existential risk compared to similar questions on Reddit and YouTube, where everyone appeared to be afraid and those who argued it is not a risk were downvoted.
Think about it… we don’t know our maker. We don’t know our purpose. We don’t know what happens when we die.
At some point the cost of living and the market value of unskilled labor will invert, and hungry people will lash out against the now-static capital class, which, depending on how far autonomous warfare advances before then, could either result in a fundamental upheaval of our economic system to wield automation technology for the common good, or result in feudalism and a dramatic drop in the supported population.
But none of that is AI's fault. It's our own greedy economic system. I'm willing to bet at least a few countries pull it off all right.
No. I think that the current machine-learning big-data interfaces that people are calling AI will eat themselves by getting into endless legal trouble, and risk-averse investors will start to pull back on the reins.
It could be the case that only AI might be able to survive on our planet in a century or so.