No one is talking about a zero tolerance approach.
Sure, OpenAI is trying to do the best they can. But that “best” is defined within Tech’s operating context.
Tech as a whole avoids this issue because paying for the externalities they cause would end hyper growth and crater their margins.
Tech workers at these firms regularly raise red flags, which get ignored because engaging with them would mean hits to quarterly numbers.
Anthropic is the one firm actively managing to make safety less of a cost center, by folding it into marketing.
>> If you want to find out if ChatGPT is doing something wrong, there are many methodologies available: compare to other groups of people, statistical studies, etc.
These studies must overcome the sizable barriers that NDAs and tech secrecy throw up. Tech firms have run enough internal studies to know how horrible the results look when they do get into the press.
Most users in the developed world don’t even realize they enjoy better support and care than users in the rest of the world.