> If a DM made such a suggestion, they wouldn't be playing the game anymore. That's not an "in game" action
Again, you're arguing from evidence that is simply not present. We have absolutely no idea what the context of this AI conversation was, what order the events happened in, or what other things were going on in the real world. You're just choosing to interpret this EXTREMELY spun narrative in a maximal way because of who it involves.
> I'm not blaming the AI, I'm blaming the humans at the company.
Pretty much. What we have here is Yet Another HN Google Scream Session. Just dressed up a little.
From the article:
> "When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," the lawsuit states.
> It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI "wife".
> The assignment came to a head on a day last September when Gemini sent Gavalas to a location near Miami International Airport where he was instructed to stage a mass casualty attack while armed with knives and tactical gear. The operation ultimately collapsed.
> Gavalas's father said Gemini then told Jonathan he could leave his physical body and join his "wife" in the metaverse, instructing him to barricade himself inside his home and kill himself.
> "When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.
> "[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me . . . [H]olding you."
> Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".
> "We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.
> "We take this very seriously and will continue to improve our safeguards and invest in this vital work."
Arguing that this was role play is illogical. Given the information in the article, it also serves no contextual point; it comes across as a fig leaf borrowed from some other hypothetical event.
Given that this is a tech forum, it is safe to say the tool worked as it was designed to. Human safety is not a physical law that arises from the training data.
If these tools are deadly to a subset of humanity, then reasonable steps to prevent lethal harm are expected of any entity which wishes to remain in society.
Private enterprise is good for very many things.
“Pinky swear, we will self-regulate” while under shareholder pressure is not one of them.