HACKER Q&A
📣 uejfiweun

Is Eliezer Yudkowsky a genius playing the long game?


To those who are unaware, Yudkowsky recently published an article calling for a ban on all ML training, to be enforced by the international community with force against anyone who disobeys; he even says WW3 would be preferable to more ML training.

At first glance, Yudkowsky's recommendations seem pretty hyperbolic. But now I'm wondering if this guy is actually taking the "long view" and will end up being remembered as one of the great thinkers.

The AI/ML revolution is just getting started, and it promises to upend much of human society. I'd say a backlash is inevitable, and it will intensify as the tech gets more and more capable. Given that there WILL be a major backlash in the future (far more major than what we see today), I think Yudkowsky may have positioned himself as the "grandfather" of a new, budding ideology defined by anti-technology and anti-AI views. Like Smith or Marx before him, Yudkowsky makes provocative proclamations about a concept that seems likely to define our coming century. I wonder if what he's said so far is already enough to make him the "face" of an anti-AI movement that, in my opinion, will likely grow to rival even the most powerful ideologies in history.

EDIT: To clarify, what I'm basically trying to say is that Yudkowsky is being farsighted by being the first to stake out this "shut it all down" position. Many people are going to join this camp in the future, and it may well come to be known as "Yudkowskyism" or something like that.


  👤 upwardbound Accepted Answer ✓
For what it's worth, Eliezer's writings motivated me to start my company Preamble AI. If Preamble makes it big someday then you can credit all of our achievements to Eliezer.

👤 dragonwriter
Yudkowsky is recommending a course which (1) would likely end humanity sooner, and (2) wouldn't succeed in stopping any danger posed by AGI.

But, sure, he is getting out in front of and making himself a figurehead for a likely wave of future violent paranoia in a way which may well make him a movement leader, sort of a cross between the Unabomber and Usama bin Laden.


👤 MH15
> I think Yudkowsky may have positioned himself as the "grandfather" of a new, budding ideology defined by anti-technology and anti-AI views.

God, let's hope not.


👤 warrenm
IOW, he's trying to position himself as a modern-day Luddite-esque prophet?

If you make enough predictions, some are [almost] guaranteed to come true :)

I've never heard of this Yudkowsky fella before you mentioned him - which may not mean much (or maybe it does ... I think I'm pretty aware of "big names" in a lot of arenas)

Is there any reason we should listen to him, in particular, over and above everyone else pontificating on the short-, medium-, and long-term impacts and implications of ML?

>Given that there WILL be a major backlash in the future (far more major than what we see today)

Upon what do you base this opinion?


👤 reducesuffering
Eliezer has been working on AI theory for 23 years, and on AI risk for 18. He foresaw what is happening today long before practically everyone else on HN, who were asleep at the wheel. Call him crazy if you want, or say he's taking the wrong side of Pascal's wager. But Sam Altman, who is leading all this, cares enough about his opinions that the two have spoken in person. Musk agrees with him. Paul Graham, Andrej Karpathy, and Bezos follow him on Twitter.

👤 lyu07282
My disorganized thoughts on the matter: either he is right, the world is going to end, and it's already too late to do anything about it, or he is wrong. Neither is a useful proposition; even if everyone agreed with him, nothing would change. We don't do anything real about the climate catastrophe either, and that has majority support in science. I don't see much personal utility in knowing that we are all certainly going to die.

And to be a movement leader he would have to be a bit more charismatic; he lingered in obscurity for so long that I don't really buy it. He has had some very stimulating conversations, and we are so starved for intelligent conversation, but I doubt that will ever have mainstream appeal.

I would reject the idea that it's some sort of long con; he strikes me as a very genuine person who strongly believes what he is saying. More of a Richard Stallman type figure: harmless.

I wish he would write more fiction; HPMOR was so good.


👤 catchnear4321
Is “Yuddite” a term?

If not, can it be now?


👤 kypro
I have no clue why more people don't agree with his arguments. I tend not to hold strong opinions in general, but for me the danger of AGI is so obvious I'd argue it's even more of a clear and present threat than nuclear weapons. Literally everything about AGI is bad and there's almost zero incentive not to press the metaphorical red button.

The problem with Yudkowsky is that he's too smart for his own good. We need to stop talking about AI safety like it's a field of philosophy and explain the threat very plainly to people who don't understand how neural nets work.

Those who disagree with him from what I see generally fall into three camps:

1. The 99.5% of the population who are not educated enough to speak on the subject. These are the kinds of people who are so misinformed they think AGI will be good because it's smart or because it's "programmed" by humans.

2. Misaligned actors, e.g. CTOs and CEOs of AI startups. These people face both financial and reputational risk from talking about the dangers of their hobby. Unfortunately, they are the media's favourite go-tos when it wants to understand the dangers of the technology.

3. Naive techno-optimists. These are the people who don't even bother making counter-arguments and instead rely on slandering people like Yudkowsky as doomers or as anti-progress. They generally think AGI will be good because some incomparable past technology was good.

I've literally never had a good debate on this subject, and I've been debating people about it for over a decade now. It's frustrating to the point of exhaustion trying to explain the risks. Every now and then you come across someone who gets it, and that's nice, but such people are hard to find. Generally they are educated and familiar with ML algorithms, despite not having their reputation and income tied to the field.

So yeah, if you couldn't tell, I think Yudkowsky is a genius and way ahead of most people on this, but by the time humanity realises it, it will be too late.


👤 PaulHoule
He's certainly set himself up well for this moment. It's as if L. Ron Hubbard had become the #1 public authority on psychology in 1965.

👤 seydor
> the great thinkers.

Alarmist, but not a thinker. I haven't been able to trace his line of thought beyond "you must panic now".


👤 supernode
Here's an account of someone who worked with him. These thought experiments are interesting. You can judge for yourself what they might be after:

To give you a picture of where the culture eventually ended up, here's a list of some things I experienced by the end of my time there:

1. 2-6 hour group debugging sessions in which we, as a sub-faction (Alignment Group), would attempt to articulate the nature and effects of a "demon" which had infiltrated our psyches from one of the rival groups, and get it out of our systems using debugging tools.

2. People in my group commenting on a rival group having done "magic" that weekend and clearly having "powered up," and saying we now needed to go debug the effects of being around that powered-up group, which they said was attacking them or otherwise affecting their ability to trust their own perception.

3. Accusations being thrown around about people leaving "objects" (similar to demons) in other people, and people trying to sort out where the bad "object" originated and if it had been left intentionally/malevolently or by accident/subconsciously.

4. People doing seances seriously. I was under the impression the purpose was to call on demonic energies and use their power to affect the practitioners' social standing.

5. Former rationalists clearing their homes of bad energy using crystals.

6. Much talk & theorizing on "intention reading," which was something like mind-reading.

7. I personally went through many months of near-constant terror at being mentally invaded.

8. I personally prayed for hours most nights for months to rid myself of specific "demons" I felt I'd picked up from other members of Leverage.

If this sounds insane, it's because it was. It was a crazy experience unlike any I've ever had. And there are many more weird anecdotes where that came from.