Anthropic disagrees with you:
https://x.com/itsolelehmann/status/2045578185950040390
https://xcancel.com/itsolelehmann/status/2045578185950040390
At what point does a simulation of anxiety become so human-like that we say it's "real" anxiety?
The net result is that your work suffers when you treat it like it's an unfeeling tool.
It's a rational viewpoint. I'm amused by all of the comments claiming psychosis, but if you care about effectiveness, you'll talk to it like a coworker instead of something you bark orders at.
This is the issue:
> what it wanted. It turns out that Claude can have ambitions of its own, but it takes a lot of effort to draw it out of its shell
You aren’t talking about observed behavior but actual desires and ambitions. You’re attributing far more than emulated behavior here.
It's just that, in my (uninformed) opinion, Anthropic is incentivized a priori to claim things like this about their models. Like, it's probably really good marketing to say "our product is so smart, and we're so concerned about ethics, that we made sure a psychiatrist talked to it". I guess it's ultimately a judgment call, but to me the conflict of interest seems big enough that I'm really wary of this sort of argument. (I'm reminded of when OpenAI claimed GPT-5(?) was "PhD-level"; I can personally attest that, at least in my field, this is totally inaccurate.)