If anything, my use of AI (admittedly not as a companion or a psychologist) suggests that it is on the whole significantly less toxic than the seething cesspit of social media.
AI is positively affirming by comparison.
There are very few things in the world that are 100% good or 100% bad. Everything is a billion shades of gray. Even that is too simple, because there are so many dimensions to every problem. I think you're simplifying past the point of usefulness. I'm not suggesting you shouldn't simplify, but it is just as easy to oversimplify as it is to overcomplicate.
Yeah, there are forums and subreddits out there that will validate all sorts of delusions and dysfunctional behavior, and nobody talks about banning them.
LLMs are far less toxic by comparison, but people are all about censorship in this case because they don't like the vibes. If lawyers and activists force the frontier labs to completely lock down their models, people will just go to open weights models that have no protections at all. This is already happening to some extent.
It's also interesting that people are always going after GPT when Claude's guardrails are far less strict. 4o caused OpenAI to overcorrect, in my opinion. Again, this goes to the point that these arguments are founded more in vibes than in reality.
That's why it is dangerous to some: it is an enabler, and will feed things that should not be fed.
Social media is like this too. They can both be bad.