Relevant article from The Atlantic a couple weeks ago, "Friendship, On Demand": https://www.theatlantic.com/family/2026/03/ai-friendship-cha... (gift link)
>The way that generative AI tends to be trained, experts told me, is focused on the individual user and the short term. In one-on-one interactions, humans rate the AI’s responses based on what they prefer, and “humans are not immune to flattery,” as Hansen put it. But designing AI around what users find pleasing in a brief interaction ignores the context many people will use it in: an ongoing exchange. Long-term relationships are about more than seeking just momentary pleasure—they require compromise, effort, and, sometimes, telling hard truths. AI also deals with each user in isolation, ignorant of the broader social web that every person is a part of, which makes a friendship with it more individualistic than one with a human who can converse in a group with you and see you interact with others out in the world.
I also thought this bit was interesting, relative to the way that friendship advice from Reddit and elsewhere has been trending towards self-centeredness (discussed elsewhere in this thread):
>Friendship is particularly vulnerable to the alienating force of hyper-individualism. It is the most voluntary relationship, held together primarily by choice rather than by blood or law. So as people have withdrawn from relationships in favor of time alone, friendship has taken the biggest hit. The idea of obligation, of sacrificing your own interests for the sake of a relationship, tends to be less common in friendship than it is among family or between romantic partners. The extreme ways in which some people talk about friendship these days imply that you should ask not what you can do for your friendship, but rather what your friendship can do for you. Creators on TikTok sing the praises of “low maintenance friendships.” Popular advice in articles, on social media, or even from therapists suggests that if a friendship isn’t “serving you” anymore, then you should end it. “A lot of people are like I want friends, but I want them on my terms,” William Chopik, who runs the Close Relationships Lab at Michigan State University, told me. “There is this weird selfishness about some ways that people make friends.”
Sherry Turkle is a name to know on this subject; she's been studying it for decades across multiple technologies.
She uses the phrase "frictionless relationships" to refer to AI chatbots and says social media primed us for this.
https://www.youtube.com/live/6C9Gb3rVMTg?t=2127
https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...
Yeah, I asked Gemini for some relationship advice, and it went straight into cut-throat mode. I almost broke up with my girlfriend, but then switched to Claude with another prompt.
Just a reminder: LLMs are statistical models that predict the next token based on the preceding tokens. They have no feelings, goals, relationships, life experience, understanding of the human condition, and so on. Treat them accordingly.
Not my experience with Claude. Claude will kick your ass if it detects harmful rationalizations.
Basically will tell you to go outside and touch grass and play pickleball.
Anecdote:
I used to use LLMs for alternate perspectives on personal situations, and for insights on my emotions and thoughts.
I had no qualms, since I could easily disregard the obviously sycophantic output, and focus on the useful perspective.
This stopped one day, when I got a really eerie piece of output. I realized I couldn't tell whether the output was actually self-affirming or simply what I wanted to hear.
That moment, seeing something innocuous that was somehow still beyond my ability to gauge as helpful or harmful, is going to stick with me for a while.
Not surprising, but nice that we have actual data now
Reddit as the source of truth…
(Using a throwaway for fear of getting downvoted to oblivion)
IMHO it is unfair to single out LLMs for this sort of bashing.
I suffered a major personal crisis a few years back (before LLMs were a thing)
I sought help from family and friends. Got pushed into psychiatrist sessions and meds.
Trusted the wrong sort of people and made crap financial decisions. Things went from bad to worse. Work suffered.
All of the advice given by friends was wrong. All of it! They didn't mean harm...they just didn't know. To be nice, they gave the advice they knew. None of it worked.
Looking at the LLM tools of now, their advice feels akin to the advice my friends threw at me. So it feels wrong to single out these tools. When the times are bad, nobody can really help you...except you, finding the strength from within.
Anyways, now my life is back in some sort of shape. What worked was time & patience.
But to bide my time...I resorted to two things that I had never tried in the 40-odd years I have lived on this earth. Things that current society looks down upon as the basest of evils - prostitutes and nicotine.
I have (more or less) shed those two evils now, but I am ever so grateful to them.
LLMs are sycophantic digital lawyers that will tell you what you want to hear until you look at the price tag and say "how much did I spend?!"
Can't you just prompt for a critical take, multiple alternative perspectives (specifically not yours, after describing your own), etc.?
It's a tool, I can bang my hand on purpose with a hammer, too.
I think if you're at the stage of life where you even need to ask, the AI might be doing everyone a favor.
As much as people whine about the birth rate and whatever else, I think it's a net good that people spend a lot more time alone to mature. Good relationships are underappreciated.
When I ask an LLM to help me decide something, I have to remind myself of the LotR meme where Bilbo asks the AI chat why he shouldn't keep the ring and receives the classic "You're absolutely right, .." slop response. They always go in the direction you want them to go, and their utility is that they make you feel better about the decision you already wanted to take.
WTF is "yes-men"?
Original title:
AI overly affirms users asking for personal advice
Dear mods, can we keep the title neutral please instead of enforcing gender bias?
We can surely fix it, and we probably should. However, I don't think AI is doing any worse here than friends' advice when they hear a one-sided story. The only difference is that friends' advice isn't getting studied.
Conversely, AI chatbots are great mediators if both parties are present in the conversation.
Marc Andreessen has talked about the downside of RLHF: it was a specific group of liberal, low-income people in California who did the rating, so AI has been leaning toward their culture.
I think OpenAI tried to diversify at least the location of the raters somewhat, but it's hard to diversify on every level.
This new Stanford study, published on March 26, 2026, shows that AI models are sycophantic. They affirm the user's position 49% more often than a human would.
The researchers found that when people use AI for relationship advice, they become 25% more convinced they are 'right' and significantly less likely to apologize or repair the connection.