It's just that, in my (uninformed) opinion, Anthropic is incentivized a priori to claim things like this about their models. Like, it's probably really good marketing to say "our product is so smart, and we're so concerned about ethics, that we made sure a psychiatrist talked to it". I guess it's ultimately a judgment call, but to me the conflict of interest seems big enough that I'm really wary of this sort of argument. (I'm reminded of when OpenAI claimed GPT-5(?) was "PhD-level"—I can personally attest that, at least in my field, this is totally inaccurate.)