"How many cases are ok" (aka "zero tolerance") is a doomed to fail approach. Especially for a complex social problem's interaction with a complex new technology.
If you want to find out if ChatGPT is doing something wrong, there are many methodologies available: compare to other groups of people, statistical studies, etc.
I also think OpenAI's business model is pretty well aligned with the goal of users not killing themselves for like 100 reasons. And they do appear to take it seriously.
No one is talking about a zero tolerance approach.
Sure, OpenAI is trying to do the best they can. That “best” is within Tech’s operating context.
Tech as a whole avoids this issue because paying for the externalities they cause would end hyper growth and crater their margins.
Tech workers at these firms regularly raise red flags, which get ignored because engaging with them would hit quarterly numbers.
Anthropic is the one firm that is actively managing to make safety less of a cost center by folding it into marketing.
>> If you want to find out if ChatGPT is doing something wrong, there are many methodologies available: compare to other groups of people, statistical studies, etc.
These studies must overcome sizable barriers that NDAs and tech secrecy throw up. Tech firms have done enough internal studies to know the results are horrible when they do get into the press.
Most users in the developed world don’t even know that they enjoy better support and care than the rest of the world.
This is the problem in a nutshell: https://edition.cnn.com/2025/11/06/us/openai-chatgpt-suicide...
> “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”
ChatGPT is not the answer.
i can certainly tell you that any actively managed forum where i encouraged you to go kill yourself would have a problem. further, any website that catered to the same would also have a problem.
the reality you're entertaining is one where I can build an LLM, let it do unspeakable things, and claim zero responsibility.
so, no, i understand zero is a figment. don't you?