Hacker News

152334H · today at 2:54 PM · 4 replies

Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

What 'tough love' can be given to someone who, having been unreasonable enough throughout their life to invite scorn and rebuke from every human they meet, is happy to interpret any engagement at all as a sign of approval?


Replies

rsynnott · today at 3:58 PM

> How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?

And even if it _could_, note, from the article:

> Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.

The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.

isodev · today at 3:09 PM

> clear thinking

Most humans working in tech lack this particular attribute, let alone tools driven by token similarity (and not actual 'thinking').

kibwen · today at 3:10 PM

> Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?

Markets don't optimize for what is sensible, they optimize for what is profitable.

expedition32 · today at 3:42 PM

It's almost as if being a therapist is an actual job that takes years of training and experience!

AI may one day rewrite Windows, but it will never be Counselor Troi.
