The problem is that if someone is right about an existential disaster caused by AI, by the time they're proven right it will already be too late.
Frontier AI models get smarter every year, but humans don't get any smarter year over year. Unless you believe that AI will somehow just suddenly stop getting better (which is as much a faith-based gamble as assuming some rapturous outcome for AI by default), you have to assume that at some point AI will surpass human intelligence in all fields, and then keep going. At that point, human minds, and human will overall, will be inconsequential compared to those of AI.
Frontier AI models get evaluated for safety precisely to avert the "AI robot uprising causes an existential disaster" scenario. At the moment we are light years away from anything like that ever happening, and that's after we literally tried our best to LARP that very scenario into existence with things like Moltbook and OpenClaw.