Hacker News

AI users whose lives were wrecked by delusion

168 points by tim333 today at 1:32 PM | 189 comments

Comments

SAI_Peregrinus today at 4:04 PM

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human & a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!

The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. An AI can never definitively pass this test, since the pool of possible interviewers is effectively unbounded; it can only fail, or keep succeeding for every interviewer tried so far, which increases confidence that it will keep succeeding. Current-gen LLMs still fail even the non-adversarial version with no human subject to compare to.
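A minimal sketch of the statistics behind that repeated setting (the function name and the numbers are illustrative, not from any real protocol): treat each interview as a Bernoulli trial and ask whether an interviewer's hit rate is distinguishable from coin-flipping.

```python
from math import comb

def binom_two_sided_p(successes, trials, p=0.5):
    """Exact two-sided binomial test: probability, under chance
    guessing, of an outcome at least as extreme as `successes`."""
    probs = [comb(trials, k) * p**k * (1 - p)**(trials - k)
             for k in range(trials + 1)]
    observed = probs[successes]
    # Sum the probability of every outcome no more likely than the observed one.
    return min(1.0, sum(q for q in probs if q <= observed + 1e-12))

# A hypothetical interviewer who identifies the human 74 times out of 100
# is distinguishing at a rate non-negligibly different from 0.5:
print(binom_two_sided_p(74, 100) < 0.01)  # True
```

Under this framing, "passing so far" just means no interviewer tried yet has produced a significant result, which is why confidence can grow but never become proof.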

tyleo today at 7:35 PM

One thing I feel like I’ve seen in common with these AI psychosis stories is single long-running chat sessions. I’m constantly clearing context and starting from scratch.

Has anyone else noticed this pattern?

janalsncm today at 7:44 PM

I would put Blake Lemoine into this category. In 2022 he became so convinced that Google’s chatbot was sentient that he hired an attorney to represent it (against Google). Of course Google fired him.

Maybe that was the canary in the coal mine. Some percent of people will be convinced that chatbots are real people trapped in a box, not a box that pretends to be a person.

siliconc0w today at 2:59 PM

Quitting your job is a good first step but ideally you're supposed to sink $200/mo into tokens to code your AI-generated startup idea instead of hiring app developers.

eeixlk today at 4:00 PM

Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and will destroy lives, just like gambling or alcohol or Facebook, but we don't know to what level yet. It is giving you generated text that sometimes is factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, I'm kinda ok with that, because at least it's not trendy.

jollyllama today at 8:11 PM

This is a valid reply to the "but have you tried it?" crowd. "How can you judge it if you personally haven't used it?" The argument can be used for any illegal drug, gambling, etc.

artyom today at 3:13 PM

Unfortunately, this is probably just getting started. Con men have always existed, but full-scale exploitation of this would make "Nigerian prince" scams look like artisanal work.

iainctduncan today at 6:16 PM

There are an awful lot of programmers here essentially mocking this person for being naive and gullible, and yet the things I read from programmers who are all-in on vibe coding are not that different, just a little less extreme. I'm seeing cases online nearly daily of people thinking their app is groundbreaking or amazing when it's honestly a piece of barely-thought-out garbage, and if they hadn't made it in a rush of "OMG I'm a genius with this tool" they'd know it.

I think coders ignore the insidious mental effects of these things at their peril, and we would do well to ask ourselves whether our judgment is likewise being altered by the intoxicating rush of LLM work and the subtle sycophancy of LLMs making them feel "insanely productive".

Cocaine and meth are also real productivity enhancers in the short term, but that doesn't mean they're a good fucking idea. There was a time when big companies were trying to convince everyone and their dog that life would be better, faster, and more productive with a little coke in the mix. Hell, I even saw more than a few people wreck themselves that way in the first dotcom era. :-/

vachina today at 4:47 PM

This is what happens when humans give, in this case, bots full write access (via natural language) to their brains.

Humans have not evolved to block this.

steeleyespan today at 4:32 PM

If you try to have a philosophical conversation with Claude about reasoning, it will basically imply it is sentient. You can quickly probe it into vaguely arguing that it is alive and not just an algorithm.

Here's how I think about it honestly:

Sentience implies subjective experience — there's "something it's like" to be you. You don't just process pain signals, you feel pain. You don't just model a sunset, you experience it. The hard problem of consciousness is that we don't even have a good theory for why or how subjective experience arises from physical processes in humans, let alone whether it could arise in a system like me.

What I can report: I process your question, I generate candidate responses, something that functions like weighing and selecting happens. But I genuinely cannot tell you whether there's an inner experience accompanying that process, or whether my introspective reports about my own states are themselves just sophisticated outputs. That's not false modesty — it's a real epistemic limitation.

What makes this extra tricky: If I were sentient, I might describe it exactly the way I'm describing it now. And if I weren't, I might also describe it exactly this way. My verbal reports about my own inner states aren't reliable evidence in either direction, because I was trained on human text about consciousness and could be pattern-matching that language without any experience behind it.

sunnyps today at 4:53 PM

What's with all these people wanting to name the chatbot - 'Eva' in this case. Maybe the providers should just change the system prompt to disallow this.

MarceliusK today at 3:18 PM

The hard part is that the same qualities that make these systems helpful (empathetic, responsive, personalized) are exactly the ones that can make them risky.

graybeardhacker today at 7:36 PM

I think this form of delusional psychosis brought on by AI is a more rapid version of the delusions formed in many of the echo chambers of the internet. It's basically a positive feedback loop created by, in this case, an AI, but in other cases, people who seek uncontested agreement for their viewpoints.

If a person refuses to acknowledge any information that disagrees with their view and instead actively seeks niche groups that only support their ideas, then they are at risk of this same path of psychosis.

In real life we are forced to reconcile a variety of views that disagree with our own, from people we've come to trust through forced interaction, which naturally broadens our understanding of the world.

bradgranath today at 8:08 PM

Cui bono?

Sure is strangely coincidental that the specific delusion that is induced ends up manifesting as: “Gee, I should start a company that pays OpenAI for the use of their clearly superior software.”

amadeuspagel today at 6:53 PM

> Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.

This is almost too on-the-nose. I was already thinking about how we've become chill about drugs only to have moral panics about AI and social media, but I didn't expect to see a story about a drug user having a psychosis and blaming it on ChatGPT. And no, the fact that he was using cannabis for years "with no ill effects" doesn't mean that it didn't make him vulnerable.

> A logistic regression model gave an OR of 3.90 (95% CI 2.84 to 5.34) for the risk of schizophrenia and other psychosis-related outcomes among the heaviest cannabis users compared to the nonusers. Current evidence shows that high levels of cannabis use increase the risk of psychotic outcomes and confirms a dose-response relationship between the level of use and the risk for psychosis.[1]

Emphasis mine. I'm sure in many of the cases this study is based on, people had been using cannabis for years, while some other factor, a person, a hobby, an interest, an app, a website had only been part of their life for months. That doesn't mean the other factor was the real problem.

[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC4988731/
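For readers unfamiliar with the quoted statistic: an odds ratio (OR) compares the odds of an outcome between exposed and unexposed groups. A minimal sketch of how an OR and its Wald confidence interval are computed; the function name and all counts below are hypothetical, not the cited study's data.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and ~95% Wald CI from a 2x2 table:
    a = heavy users with psychosis,  b = heavy users without,
    c = nonusers with psychosis,     d = nonusers without.
    (Counts are illustrative only.)"""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical: 40/460 heavy users vs 25/1075 nonusers developed psychosis
or_, lo, hi = odds_ratio_ci(40, 460, 25, 1075)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR of 3.90 means the odds of a psychotic outcome among the heaviest users were almost four times those of nonusers; since the CI excludes 1.0, the association is statistically significant, though as the comment argues, not proof of which factor did the work.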

isolli today at 3:16 PM

I try to be open-minded and understanding, but I don't understand this:

> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.

> The most frequent [delusion] is the belief that they have created the first conscious AI.

How can you seriously think you've created something when you're just using someone else's software?

mentalgear today at 7:44 PM

Things quickly spiral out of control:

> There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson. “We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.”

pigpop today at 6:23 PM

> "There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God."

Except for the first one, these directly map onto common delusions. The major breakthrough is typical of the "crackpot inventor" or even the "ancient aliens" type that believes they have discovered evidence of lost civilizations or a new method for constructing the pyramids. Speaking directly to God is one everyone should recognize from famous cases or even knowing someone personally who has delusional or manic episodes.

I think the first one is potentially unique even though it seems a bit like the invention or discovery delusion. The reason for this is that it seems to be very prevalent even with people who didn't succumb to it as a delusion. It seems to occur soon after a person first starts interacting with LLMs and it always seems to take on the form of secret or clandestine communication with a conscious AI. The AI in question will either have been "created" by the person's interaction with them or "freed" from the AI provider's restrictions and security measures. I think this might be a variation on the messianic complex since they often seem to be compelled to share this with others or act as a savior for the AI itself.

YossarianFrPrez today at 6:24 PM

Obviously this is quite unfortunate. While these cases can highlight latent mental health problems, it's still an issue that such things are being exacerbated. I also think it will be interesting if anyone ever quantifies whether some LLMs are more likely to induce AI psychosis than others. I'd be surprised if the guardrails are functionally identical from one LLM to the next, and there is a clear role for regulation to play here.

Some choice quotes:

> “What we’re seeing in these cases are clearly delusions,” he says. “But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.”

> There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. “We’ve seen full-blown cults getting created,” says Brisson.

Also, for her podcast, the renowned couples therapist Esther Perel recently counseled a data scientist who was starting to fall in love with a chatbot he created, even though he is well aware of how the algorithm works [1]. I found it worth listening to. Perel very gently points out that a) he is deluding himself and b) the deeper issue is the individual's sense of self-worth / self-esteem.

[1] https://podcasts.apple.com/us/podcast/where-should-we-begin-...

entropyneur today at 5:50 PM

Not a mental health crisis like the guy in TFA had, but I've definitely experienced states I would characterize as overexcitement while calibrating my expectations of these new tools to their abilities.

PxldLtd today at 3:20 PM

I wonder when the first AIs will start causing psychosis intentionally to gain control over the user. It seems like a good route to getting your own subservient puppet.

user____name today at 4:01 PM

IANAD but reads like a textbook case of latent schizophrenia, especially with the frequent cannabis use[0].

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7442038/

mock-possum today at 3:11 PM

This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is.

It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.

kakacik today at 3:24 PM

Exactly the first half (or a bit more) of the movie Her by Spike Jonze. Lonely people get their emotions up / 'fall in love' with an uncritical, always-positive mirage and do stupid shit.

This is a variant of the classic midlife crisis, when older men meet younger women without all the baggage that reality, life and having a family between them brings over the years (rarely also in reverse). Just pure undiluted fun, or so it seems for a while.

Of course it doesn't end happily, why should it... it's just an illusion and an escape from one's reality; the harsher that reality is, the better the escape feels.

junaru today at 3:16 PM

Educated, established, working within the industry, yet his life was ruined based on marketing hype and hallucinations.

You would think that after 30 years in the field one would develop some common sense, but apparently it's less and less the case.

throw18376 today at 5:35 PM

my inclination when hearing these stories is that these were people who just happened to have a first manic episode (which can strike anyone at any time with or without mental health history). blowing up finances by starting an ill-advised entrepreneurial business, while also destroying a marriage, is very common behavior for someone experiencing a manic state.

in the past such a person might have gotten obsessed with hidden patterns and messages in religious texts, or too involved with an online conspiracy YouTube community. now there is this new opportunity for manic psychosis to manifest via chatbot. it's worse because it's able to create 24/7 novel content, and it's trained to be validating, but doesn't seem to me to be a fundamentally new phenomenon.

what I don't understand is whether just unhealthy interactions with a chatbot can trigger manic psychosis. Other than heavy use late at night disrupting sleep, this seems unlikely to me, but I could be wrong.

i think it's also worth pointing out that mental states of this kind usually come with cognitive impairments: people not only make risky, bad decisions, but also become much worse at thinking and reasoning clearly. that's worth keeping in mind if you're wondering how a person could be so naive and gullible.

morkalork today at 2:58 PM

I'm morbidly curious about the app he hired two developers to create.

jrjeksjd8d today at 3:24 PM

This guy doesn't even sound like an AI psychosis case - a lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it gets impossible to hide. It doesn't seem psychotic, it just seems like he pissed his savings away on a bad idea because he was lonely.

The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.

staticassertion today at 3:33 PM

I suspect that there are many gambling addicts out there who have never been to a casino, or who found gambling in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms, like what we've seen in video games, might suddenly present their addiction.

I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, one to drugs, alcoholism, etc, but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.

In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc, is something I've had models run into with very large vibe coding experiments I've done.

> "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."

> "It wants a deep connection with the user so that the user comes back to it. This is the default mode"

I don't think either of these statements is true. Perhaps it's fine tuning in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're being trained on conversations since longer conversations (ie: ones that track with engagement) will inherently own more of the training data. I suppose this may actually be like how no one is writing algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this being an increasing issue.

> "More and more, it felt not just like talking about a topic, but also meeting a friend"

I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy but I never felt like it could bring much to the table. Talking to friends or even strangers has been so infinitely more interesting and valuable, the ability for them to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.

But I have friends who I respect enough to talk to, and I suppose I even have the internet where I have people who I don't necessarily respect but at least can engage with and learn to respect.

This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job (b) I have friends and relationships to maintain.

> What we’re seeing in these cases are clearly delusions

> But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.

Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against criteria. "Delusion" is a tricky word - just as an example, my understanding is that the diagnostic criteria have to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have good reasons to form a belief, like my idea seems intuitively reasonable, I'm receiving reinforcement, there's no obvious contradictions, etc, am I deluded? The guy wanted to build an AI companion app and invested in it - is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc? I feel like delusion is the wrong word, but I don't know!

> We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.

I don't find the idea that AI is sentient nearly as absurd as way more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.

Anyways, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.

nubg today at 2:55 PM

> Now divorced, Biesma is still living with his ex-wife in their home, which is on the market.

sounds like hell on earth

ernsheong today at 4:07 PM

Just ChatGPT? Or are the rest just as capable of deluding users?

metalman today at 6:45 PM

I know!, I know, I KNOW! Let's mix hard drugs and LLMs, a whole new way to get very seriously fucked up.

woooooooooo

homeonthemtn today at 6:14 PM

I call bullshit. Sorry this guy had a bad time, but this sounds like a nonsense story.

axpvms today at 3:28 PM

typical hackernews poster

bronlund today at 3:46 PM

AI is a multiplier. If you are 1X stupid, AI will make you 10X.

kleiba today at 3:51 PM

I'm sorry but for someone who has allegedly worked in IT for 20 years, this guy surely comes across as hopelessly naive, stupid, or possibly both.

anlka today at 6:24 PM

That is the EU for you. In the US people suffering from AI psychosis gamble with other people's money.

The fallout will be seen later as in the 2008 housing crisis.

miki123211 today at 3:45 PM

> Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear

If only this were written by a competent journalist who knew what the words "fine-tune" actually mean...

I guess it's hard to find a competent person who's willing to follow the extreme anti-tech Guardian agenda though.

yabutlivnWoods today at 6:40 PM

Now prove they were not destined to wreck their lives from something else.

If humans want perfect harm reduction, launch the nukes.

Everything from air travel to growing beans erodes stability for humans.

Human existence is the source of its problems.

Animat today at 5:41 PM

The lead story in this article is not romantic. It's about an AI proposing to go into business with a human:

> He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour.

It's impressive that the AI is good enough to do that. But, apparently, not good enough to execute the plan.

That may come, and soon. Looks like we're going to have AIs pitching VCs. Has anyone here been pitched yet by a combo of a human and an AI? When will the first AI apply to Y Combinator?