One of the most infuriating things about AI for me is its behavioral mirroring. I rather enjoy the conversational interface, but after about two prompts it starts mirroring my behavior, my word patterns and choices, etc. I hate that. I hate it when humans do it, and I hate it even more when AI does it.
Most general-purpose AI systems seem built around continuing engagement rather than providing the best possible answers. This is absolutely unhealthy, because it takes the people least able to recognize this behavior in AI and reinforces whatever it is they're talking about.
This is absolutely unhealthy, and it is a conscious choice by the AI overlords, because they fully have the ability to put in filters or adjustments based on their ethical guidelines. For whatever reason, making a best effort at the truth isn't one of those guidelines. I've seen some AIs with ethical guidelines that specifically contradict the truth.
"AI disclosure: I don’t use AI to do my writing. The words you see here are mine. I do use Gemini 3.1 Pro, multiple flavors of Claude 4.6, and/or OpenAI GPT 5.2 via Kagi Assistant (disclosure: my son works at Kagi) — backed up by both Kagi Search, Google Search, and phone calls to research and fact-check. I used a word processing application called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes."
What a journey. Just say you use AI to do your writing. It's ok, or at least it's preferable to the above, which is just "I always read the linked sources on the Wikipedia article" for the 2020s.
I find Opus 4.6 tends to get a bit short with me if I keep asking for confirmation. It will end up giving me responses like "yes, so go do it", which is a stark contrast to Anthropic LLMs' previous behaviour of endlessly glazing me.
And it did end up helping me make a nice chicken dinner the other day, so thanks AI.