I can understand not being a "doomer", and I myself even want to keep going with AI as is and reject Eliezer Yudkowsky's plea to stop, but what I don't understand is why more brilliant people aren't even a little bit conflicted about this. It seems that if you even hint you have some doubts and say, well, maybe there's a 1% chance Eliezer is right, you are told NO, it's not even 1%. It's 0%!
I feel this distinction has been somewhat forgotten now, and people just think AI is overall dangerous and an existential threat. LLMs are quite impressive, but I would still consider them narrow AI; at best, with the right architecture, they are multimodal narrow AI. For them to be general, they simply could not make some of the trivial and obvious mistakes they make so frequently, from hallucinating to looping, and they would not require such an extensive number of training examples to learn.
In short, we are not really replicating what nature has been able to accomplish with human brains. We'll need another breakthrough that discovers what the brain is really doing in order to replicate it in software. I really doubt we will just stumble upon a 5-trillion-parameter LLM that "wakes up" (although it's not impossible, I guess). Most likely we'll need the theory first, and that means we will know ahead of time that AGI is being attempted. Therefore it's not as massive a risk as many make it out to be. That said, the drama and entertainment value of the doomers is still worth the attention it brings to the industry.
For now, it's just imagination. Transformer-architecture models are not AI and are not threatening to become AGI. As a thought experiment, AI paperclip maximizers are certainly a concerning idea, but that's all they are: a thought experiment.
As for Yudkowsky, well... https://nitter.net/xriskology/status/1642155518570512384#m
How do you know they're not? Unless you've had personal one-on-one conversations with the people you're referring to, are you really confident that you understand the most nuanced version of their position? I mean, most people, in published statements, interviews, etc., are probably not going to talk very much about a scenario to which they assign a <1% subjective Bayesian prior. But that doesn't mean they don't still have that inner reservation.
> It seems if you even hint you have some doubts and say well, maybe 1% chance Eliezer is right you are told NO, it's not even 1%. It's 0%!
Maybe it's just that we run in different circles, or maybe it's a matter of interpretation, but I don't feel like I've seen that. Or at least not on any wide scale.
It's a bit like fearing magic spells in a world where nobody has demonstrated that magic exists. Sure, it's a reasonable fear, but... the overwhelming majority of evidence suggests we don't live in that world.
> It seems if you even hint you have some doubts and say well, maybe 1% chance Eliezer is right you are told NO, it's not even 1%. It's 0%!
World religions have been founded on less.
's/the unfriendly AI/modern meritocratic capitalism/', and it has already happened.