For a while there was a wave of posts from people experimenting with ChatGPT to write anger-bait posts on Reddit, which they would later edit to admit were fake and written by ChatGPT.
I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too contrived to be believable.
However, it did not matter. The posts remained popular and continued to draw comments even after the admission that they were fake. In advice subreddits, commenters kept giving advice on the situation. Some would say they had seen the notice that it was fake but continued arguing about it anyway.
This makes one feature of Reddit very clear: the truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, to be angry about.
In retrospect it's obvious, given that misinfo posts were the easiest way to karma-farm for years, even before AI.
>However it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake
That's 90% of current Facebook pages and groups.
Even without AI slop I've noticed this happen on Reddit.
I once made a rather boisterously-argued comment on a political issue I'm passionate about, and I realised that I'd made a serious error of reading comprehension when it came to my opponent's argument. I apologised to them for being an abrasive arse over my own mistake, then edited my comment to say that I was mistaken.
My incorrect comment, which literally said at the bottom that it was incorrect, continued to be upvoted, while my opponent, who had made the stronger argument, continued to be downvoted.
We do precisely the same thing here. Here's a relatively recent post that, to me, seems obviously LLM-written. It just rattles off some management platitudes:
https://news.ycombinator.com/item?id=47913650
It drew 639 comments and 866 upvotes. And that's not a one-off.