I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
"What if AI doom is all fear-mongering, and we create AI less prone to make up dangerous stuff or mistake buggy goals for real ones" (which is what alignment is) "for nothing?"
Even if Yudkowsky is autistic, you're muddling the condition. Autistic people have a *practical* intolerance of uncertainty in the moment (everything unexpected, from a noise to a missed turn, can be a jump-scare in their day-to-day activities), but they're often absolutely fine with intellectual uncertainty, unconventional ideas, abstract ambiguity, nonconformity, etc. Indeed, one of Yudkowsky's main things is Bayesianism, i.e. being precise about uncertainty.
Yudkowsky's reported P(doom) is somewhere around 90%, which is very much in the realm of "we might eventually be able to figure this out, *but we're not even close to ready, so for the love of everything slow down so we can figure it all out*". The book title comes from a long tradition of authors noticing that you need to beat readers over the head with your point for them to notice it.
Anthropic (like OpenAI, at least) appears to think it can solve the problems that Yudkowsky has identified. They're a lot more optimistic than he is, but they take these problems seriously.
For his work on AI, Hinton got a Nobel Prize in Physics, a Turing Award, the inaugural Rumelhart Prize, a Princess of Asturias Award, a VinFuture Prize, and a Queen Elizabeth Prize for Engineering. Calling him a "patron saint" of "doomerism" is like calling Paul Krugman (a Nobel laureate in Economics) a patron saint of "Trump Derangement Syndrome" on the basis of what he says on his YouTube channel. A smart person's considered opinions are worth listening to even if you haven't got time for the details, because you can be sure someone else has considered the details and will absolutely be responding to even an undotted i.
A Pascal's mugging would be more like S-risk (S stands for suffering) than doom risk: https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
The problem is that effort spent to reduce the "risk" of creating an evil god who tortures us all for the rest of time doesn't actually produce outcomes that reduce the risk of things like widespread job loss or hyperaggregation of influence and money.
"Oh we'll at least get some side benefit" is not actually what is coming out of the endlessly circular forums talking about the apocalypse.
> I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
The people who've made the biggest contribution to creating a better world over the last 50 years have been the Chinese, powered largely by coal and petroleum. And in one of the most ironic results of the 21st century, they're now the leaders in solar panel production, on the back of the largest investment in fossil-fuel energy in global history.
The comic ran into the same problem as the climate change movement in general: it proposed ideas that generally made people worse off. And, measured in terms of CO2 emissions, it achieved nothing except pushing wealth creation to Asia. Which, in fairness, is probably appreciated by the Asians.
Much like a lot of LLM usage burns tokens so that mediocre people can hallucinate that they're doing something brilliant, Yudkowskyism is just a lot of empty verbiage for the purpose of building a sex cult around a plump gnome. Reusing his nonsensical and poorly defined terms but failing to get the benefit of the sex cult really misses the point of the entire exercise.