Even as someone who (wrongly) believed that I had high emotional intelligence, I too was bitten by this. Almost a year ago, when LLMs were starting to become more ubiquitous and powerful, I discussed a big life/professional decision with an LLM over the course of many months. I took its recommendation. Ultimately it turned out to be the wrong decision.
Thankfully it was recoverable, but it really sobered me up on LLMs. The fault is on me, to be clear, as LLMs are just a tool. The issue is that lots of LLMs try to come across as interpersonal and friendly, which lulls users into a false sense of security. So I don't know what my trajectory would have been if I were a teenager with these powerful tools.
I do think that LLMs have gotten much better at this, especially Claude, and will often push back on bad choices. But my opinion of LLMs has forever changed. I wonder how many other terrible choices people have made because these tools convinced them.
I recently found out that Claude's latest model, Sonnet 4.6, scores the highest on Bullsh*tBench[0] (funny name, I know). It's a recent benchmark that measures whether an LLM refuses nonsense or pushes back on bad choices, so Claude has definitely gotten better.
[0] - https://petergpt.github.io/bullshit-benchmark/viewer/index.v...
One mental model I have with LLMs is that they have been the subject of extreme evolutionary selection forces that are entirely the result of human preferences.
Any LLM not sufficiently likable and helpful in the first two minutes was deleted or not iterated on further, or had so much retraining (sorry, "backpropagation") that it's not the same model it started out as.
So it's going to say whatever it "thinks" you want it to say, because that's how it was "raised".
If you use LLMs with the underlying assumption that they are capable of "thinking" or "caring", you are going to get burned pretty badly. It is an illusion, and illusions disappear when they have to bear the real weight of reality.
But sadly LLMs push all the right buttons that lead humans into that kind of behavior. And the marketing around LLMs works overtime to reinforce that behavior.
If instead you ignore all that and use LLMs as a search tool, you will get positive returns from them.
> I took its recommendation. Ultimately it turned out to be the wrong decision.
Curious if you think a single person would have helped you make a better decision? Not everything works out. If a friend helped me make a decision I certainly wouldn’t blame them later if it didn’t work out. It’s ultimately my call.
Weird, I am using Copilot and it steers me mostly towards self-reflection and tries to look at things objectively. It is very friendly and comes across as empathetic, so as not to hurt your feelings; that is probably baked in to keep the conversation going...
Let’s just hope that the people in charge of the really important decisions that affect us all approach LLM generated advice with the same wisdom.
I’m struggling to understand how the advice coming from an LLM is any more or less “good” than advice coming from a human. Or is this less about the “advice” part of LLMs and more about the “personable” part, i.e. you felt more at ease seeking and trusting this kind of advice from an LLM?
I largely agree. I also thought I was smart enough not to be deluded into a false sense of security, but interacting with an LLM is so tricky and slippery that, more often than not, you are led to believe you just solved a problem no one had solved in a hundred years.
My guideline now for interacting with LLMs is to only believe the result if it is factual and easily testable, or if I'm a domain expert. Anything else, especially if I'm completely ignorant about the subject, I approach with a high degree of suspicion, knowing I can be led astray by its sycophancy.
Yeah, I think Claude is a lot more logical in that sense. I use it for some therapy sessions myself, and it pushes back a bit more than OpenAI and Gemini do.
I also used it for advice on a massive personal decision, but I specifically asked it to debate with me and persuade me of the other side. I specifically prompted it for things I am not thinking about, or ways I could be wrong.
It was extremely good at arguing the other side too. You just have to ask. I can imagine most people don't try this, but LLMs literally just do what you ask them to, and they're extremely good at weighing both sides if that's what you specifically ask for.
So whose fault is it if you only ask for one side, or if the LLM is too sycophantic? I'm not sure it's actually the LLM's fault.
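The "ask it to argue the other side" habit is easy to turn into a reusable template. A minimal sketch in Python; the function name and prompt wording are my own illustration of the idea, not any particular product's API:

```python
def devils_advocate_prompt(decision: str, leaning: str) -> str:
    """Build a prompt that explicitly asks the model to argue
    against the side the user is currently leaning toward,
    instead of agreeing with it."""
    return (
        f"I am deciding: {decision}\n"
        f"I am currently leaning toward: {leaning}\n\n"
        "Do not agree with me. Steelman the opposite choice, "
        "list the strongest reasons I might be wrong, and point "
        "out considerations I have not mentioned."
    )

# Example (hypothetical decision), sent as the user message to any chat LLM:
prompt = devils_advocate_prompt(
    "whether to accept a job offer in another city",
    "accepting the offer",
)
print(prompt)
```

The point is just that the instruction to disagree is stated explicitly up front, so the model's default agreeableness has something concrete to push against.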
>"'And it is also said,' answered Frodo: 'Go not to the Elves for counsel, for they will say both no and yes.'
>"'Is it indeed?' laughed Gildor. 'Elves seldom give unguarded advice, for advice is a dangerous gift, even from the wise to the wise, and all courses may run ill...'"
This is the only way you should solicit personal advice from an LLM.
I think that if you go to an AI for advice and emotional support, it will do what most people will do - tell you what it thinks you want to hear. I am not surprised about this at all, and I do notice that when you veer into these areas, it can do it in a surprisingly subtle and dangerous way.
I try to focus on results. Things like an app that does what you want, data and reports that you need, or technical things like setting up a server, setting up a database, building a website, etc.
I have also found it useful for feedback and advice, but only once I have had it generate data that I can verify. For example, financial analysis or modelling, health advice (again factual based), tax modelling, etc, but again, all based on verifiable data/tables/charts.
I am very surprised by what Claude is capable of across the entire tech stack: code, sysadmin, system integration, security. I find it scary. Not just the speed, but also the quality and the reduced mental load; it's a difference in kind, not just in degree.
Personal advice on life decisions/relationships ? No way I would go there.
It is also good for me to know that the tools I have built, the data I have gathered, and my thinking approach place me among the most intelligent developers and analysts in the world.