At first glance, Yudkowsky's recommendations seem pretty hyperbolic. But now I'm wondering if this guy is actually taking the "long view" and will end up being remembered as one of the great thinkers.
The AI/ML revolution is just getting started, and it promises to upend much of human society. I'd say a backlash is inevitable, and will intensify as the tech gets more and more capable. Given that there WILL be a major backlash in the future (far more major than what we see today), I think Yudkowsky may have positioned himself as the "grandfather" of a new, budding ideology defined by anti-technology and anti-AI views. Like Smith or Marx before him, Yudkowsky makes provocative proclamations about a concept that seems likely to define our coming century. I wonder if what he's said so far is already enough to make him the "face" of an anti-AI movement that, in my opinion, will likely grow to rival even the most powerful ideologies of history.
EDIT: To clarify, what I'm basically trying to say is that Yudkowsky is being farsighted by being the first to stake out this "shut it all down" position. Many people are going to join this camp in the future, and it may well come to be known as "Yudkowskyism" or something like that.
But, sure, he is getting out in front of and making himself a figurehead for a likely wave of future violent paranoia in a way which may well make him a movement leader, sort of a cross between the Unabomber and Usama bin Laden.
God, let's hope not.
If you make enough predictions, some are [almost] guaranteed to come true :)
I'd never heard of this Yudkowsky fella before you mentioned him - which may not mean much (or maybe it does ... I think I'm pretty aware of "big names" in a lot of arenas)
Is there any reason we should listen to him, in particular, over and above everyone else pontificating on the short-, medium-, and long-term impacts and implications of ML?
>Given that there WILL be a major backlash in the future (far more major than what we see today)
Upon what do you base this opinion?
And to be a movement leader he would have to be a bit more charismatic; he lingered in obscurity for so long that I don't really buy it. He has had some very stimulating conversations, and we are so starved for intelligent conversation, but I doubt that will ever have mainstream appeal.
I would reject the idea that it's some sort of long con; he strikes me as a very genuine person who is a strong believer in what he is saying. More of a Richard Stallman type figure: harmless.
I wish he would write more fiction; HPMOR was so good.
If not, can it be now?
The problem with Yudkowsky is that he's too smart for his own good. We need to stop talking about AI safety like it's a field of philosophy and explain the threat very plainly to people who don't understand how neural nets work.
Those who disagree with him, from what I see, generally fall into three camps:
1. The 99.5% of the population who are not educated enough to speak on the subject. These are the kinds of people who are so misinformed they think AGI will be good because it's smart or because it's "programmed" by humans.
2. Misaligned actors, e.g., CTOs and CEOs of AI startups. These people face both financial and reputational risk from talking about the dangers of their hobby. However, they are unfortunately the media's favourite go-tos when it wants to understand the dangers of the technology.
3. Naive techno optimists. These are the people who don't even bother making counter arguments and instead rely on slandering people like Yudkowsky as doomers or as being anti-progress. They generally think AGI will be good because some incomparable past technology was good.
I've literally never had a good debate on this subject, and I've been debating people about it for over a decade now. It's frustrating to the point of exhaustion trying to explain the risks. Every now and then you come across someone who gets it, and that's nice, but they're hard to find. Generally these people are educated and familiar with ML algorithms, despite not having their reputation and income tied to it.
So yeah, if you couldn't tell, I think Yudkowsky is a genius and way ahead of most people on this, but by the time humanity realises it, it will be too late.
Alarmist, but not a thinker. I haven't been able to trace his line of thought beyond "you must panic now."
To give you a picture of where the culture eventually ended up, here's a list of some things I experienced by the end of my time there:
1. 2-6 hour long group debugging sessions in which we as a sub-faction (Alignment Group) would attempt to articulate a "demon" which had infiltrated our psyches from one of the rival groups, work out its nature and effects, and get it out of our systems using debugging tools.
2. People in my group commenting on a rival group having done "magic" that weekend and clearly having "powered up," and saying we needed to now go debug the effects of being around that powered-up group, which they said was attacking them or otherwise affecting their ability to trust their own perception.
3. Accusations being thrown around about people leaving "objects" (similar to demons) in other people, and people trying to sort out where the bad "object" originated and if it had been left intentionally/malevolently or by accident/subconsciously.
4. People doing seances seriously. I was under the impression the purpose was to call on demonic energies and use their power to affect the practitioners' social standing.
5. Former rationalists clearing their homes of bad energy using crystals.
6. Much talk & theorizing on "intention reading," which was something like mind-reading.
7. I personally went through many months of near-constant terror at being mentally invaded.
8. I personally prayed for hours most nights for months to rid myself of specific "demons" I felt I'd picked up from other members of Leverage.
If this sounds insane, it's because it was. It was a crazy experience unlike any I've ever had. And there are many more weird anecdotes where that came from.