This is touched upon in the article:
> Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.
> The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.
0.07% doesn't sound like much, but ChatGPT has about a billion weekly active users (WAU), which works out to roughly 700,000 people per week.
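A quick back-of-envelope check (the billion-WAU figure is an approximation, not an exact count):

```python
# Rough check of the arithmetic above.
weekly_active_users = 1_000_000_000  # ~1 billion WAU (approximate)
flagged_rate = 0.0007                # 0.07% per week, per the article

print(f"{weekly_active_users * flagged_rate:,.0f}")  # 700,000
```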
That number terrifies me not because it is so high, but because it exists.
What is stopping an entity (corporate, government, or otherwise) from using a prompt to make sweeping decisions about whether people are mentally or otherwise "fit" for something, based on their AI usage? Clearly not the technology.
I'm not saying mental health problems don't exist, but using AI to detect them freaks me out.
Is that different from the rate among people who don't use AI, though? If it's 0.01% outside of AI but 0.07% among AI users, then either AI attracts people with those conditions or AI increases the likelihood of developing them. That's worth studying.
It's also possible that 0.1% of people have them and AI is actually reducing the number of cases...
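For what it's worth, both readings are consistent with the same observed number. A minimal sketch, assuming the hypothetical background rates floated in this thread (only the 0.07% comes from the article; the rest are made up for illustration):

```python
# Only user_rate (0.07%) is from the article; the background rates
# are the hypothetical figures floated in this thread.
user_rate = 0.0007  # share of weekly active users flagged, per OpenAI

scenarios = {
    "selection or causation": 0.0001,  # 0.01% assumed background rate
    "AI reducing cases":      0.0010,  # 0.1% assumed background rate
}

for label, background_rate in scenarios.items():
    ratio = user_rate / background_rate
    print(f"{label}: users at {ratio:.1f}x the assumed background rate")
# selection or causation: users at 7.0x the assumed background rate
# AI reducing cases: users at 0.7x the assumed background rate
```

The same 0.07% supports opposite conclusions depending on the unknown background rate, which is exactly why this would need a controlled study rather than a headline comparison.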