Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?
How is a chatbot supposed to determine when a user is fooling even themselves about what they have experienced?
What 'tough love' can be given to someone who has been unreasonable enough throughout their life to invite scorn and rebuke from every human they meet, and who is therefore happy to interpret any engagement at all as a sign of approval?
> clear thinking
Most humans working in tech lack this particular attribute, let alone tools driven by token-similarity (and not actual 'thinking').
> Maybe it's not so sensible to offload the responsibility of clear thinking to AI companies?
Markets don't optimize for what is sensible; they optimize for what is profitable.
It's almost as if being a therapist is an actual job that takes years of training and experience!
AI may one day rewrite Windows, but it will never be Counselor Troi.
> How is a chatbot supposed to determine when a user fools even themselves about what they have experienced?
And even if it _could_, note this from the article:
> Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.
The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.