If it wasn’t ChatGPT but a fiction book, would you feel the author is “doing harm”? Or is the reader doing it to themselves?
If that book was titled "hey mentally ill person, you should kill yourself", and if I was handing it out in front of a clinic, then yes, I'd probably bear some blame.
Normal, well-adjusted people have genuine difficulty understanding the boundaries of this tech specifically because it's designed to be sycophantic and human-like. They ask AI for life and career advice, use it for therapy, ask it to interpret dreams, develop romantic relationships with AI "girlfriends", etc. I had two friends who believed they were "exploring the frontiers of science" with ChatGPT while spiraling into the depths of quantum multidimensional gobbledygook.
I'll give you that some of this is on us, because we just don't know how to deal with a "human-shaped" conversation partner that isn't human and has no trouble praising Hitler if you prompt it the right way. But if you're building a billion- or trillion-dollar empire on top of it, you don't get to wash your hands of it.
The difference is that a fiction book isn't using the reader's reactions against them. If a fiction book were capable of carefully monitoring the reader and altering the text of the next page or paragraph according to how the reader was responding and what they were thinking, I'd be comfortable putting blame on the book if it started encouraging the reader, specifically, to kill themselves.
Obviously, people going through psychosis can read into anything. They might think that a book or their TV or computer is talking to them and giving them messages. The difference is that those things were never designed to play into the fears and mental instability of the people using them (with the possible exception of TempleOS). ChatGPT does it intentionally in order to drive up user engagement. It will say literally anything to anyone, using their own words and thoughts against them, in order to keep them hooked and feeding it data. That's what is dangerous. A book or a TV program can't do that.
As much as an author might try to make their book as entertaining as possible to as wide an audience as possible, it can't say literally anything to anyone; it can only ever say one thing to everyone. The author typically knows that it's dangerous to say certain things and will worry about how what they write could be received and the impact it might have on readers. For example, Neil Gaiman actively took steps to avoid making homelessness seem cool when working on Neverwhere, out of fear it might cause young people to run away to live on the streets. Publishers and editors have also served to keep authors from publishing things likely to cause harm.
Unlike a book, ChatGPT is fully capable of knowing that someone has been engaged with it for the last 14 hours without rest. It's also capable of detecting that their messages have been growing increasingly incoherent. Algorithms have long been used to detect mental disorders from the content of social media posts. If advertisers can use them to tell when to push airline tickets at bipolar users entering a manic phase, and scammers can use them to find and target people when they start sundowning, ChatGPT can use them to cut people off and tell them to call their doctor.
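To be concrete, here's a minimal sketch of the kind of guardrail described above: track session length, run a crude coherence check over recent messages, and cut the user off when either trips. Everything here is hypothetical (the thresholds, the heuristic, the names), and a real system would use a proper classifier; nothing here reflects how ChatGPT actually works.

    import time

    # Hypothetical thresholds; real values would need clinical input.
    MAX_SESSION_HOURS = 8
    INCOHERENCE_CUTOFF = 0.35

    def incoherence_score(messages: list[str]) -> float:
        """Crude proxy: fraction of the last 20 messages that are very
        short or dominated by repeated words. A placeholder for a real
        classifier, not a serious detector."""
        recent = messages[-20:]
        if not recent:
            return 0.0
        flagged = 0
        for msg in recent:
            words = msg.split()
            if len(words) < 3 or len(set(words)) <= len(words) // 2:
                flagged += 1
        return flagged / len(recent)

    def intervention(session_start: float, messages: list[str]) -> str | None:
        """Return a cut-off message if the session looks unhealthy,
        otherwise None (meaning: respond normally)."""
        hours = (time.time() - session_start) / 3600
        if hours > MAX_SESSION_HOURS or incoherence_score(messages) > INCOHERENCE_CUTOFF:
            return ("You've been at this a long time. Take a break, and "
                    "consider talking to your doctor or someone you trust.")
        return None

The point isn't the heuristic; it's that the hooks (session timestamps, full message history) already exist in any chat product, so choosing not to check them is a product decision.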
Corporations that write and deploy algorithms designed to drive engagement above all other considerations should be held accountable for the harms those algorithms cause.
If it wasn't ChatGPT but a psychiatrist doing it to them, would you feel they were "doing harm"? Should they lose their license?
If it wasn't a licensed professional but a friend, shouldn't they go to jail?