Obviously this is quite unfortunate. While these cases can highlight latent mental health problems, it's still troubling that such problems are being exacerbated. I'd also find it interesting if anyone ever quantifies whether some LLMs are more likely to induce AI psychosis than others. I'd be surprised if the guardrails are functionally identical from one LLM to the next, and there is a clear role for regulation to play here.
Some choice quotes:
> “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.”
> There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson.
Also, for her podcast, the renowned couples therapist Esther Perel recently counseled a data scientist who was starting to fall in love with a chatbot he had created, even though he was well aware of how the algorithm works [1]. I found it worth listening to. Perel very gently points out that a) he is deluding himself and b) the deeper issue is his sense of self-worth / self-esteem.
[1] https://podcasts.apple.com/us/podcast/where-should-we-begin-...