Researchers at top AI labs don't consider EY a kook, even if they don't necessarily agree with him. EY's concepts and terminology appear in Anthropic safety papers. Geoffrey Hinton takes him quite seriously and mentions him in interviews.
Anthropic is the AI doomer / safetyism lab, and Hinton is one of the patron saints of 'rationalist' AI doomerism.
AI doomerism is psychologically attractive to "people with autistic cognitive traits, including dichotomous (black-and-white) thinking, intolerance of uncertainty, and a tendency toward catastrophizing". They are Pascal's-mugging themselves, to ironically borrow one of their own terms. It's fundamentally a cognitive distortion.
Just because some researchers are infected with the idiocy that EY propagates does not mean it is legit.
Maybe they should pay more attention to real problems, like the sycophancy of current LLMs triggering psychosis in vulnerable people, and worry less about theoretical AGI.
Researchers at top AI labs also have an incentive to say whatever it takes to get their lab funded, reason be damned.
And people working on the metaverse endlessly referenced Ready Player One despite it being ludicrous fiction.
Yudkowsky is obviously read a lot by some people working in AI. That doesn't make his ideas prescient.