Serious question: If there are so many LLMs on online forums, who is doing it? Is it just 1000s of research students or something more nefarious? Is it AI businesses building up evidence that their output is as highly scored as humans therefore "buy our software"?
Established accounts are worth money, often for scamming/propaganda.
Not too dissimilar to people bot-leveling in MMOs to sell the accounts.
It's very common for folks to search Reddit to find reviews of products etc. these days. If you can have a bot account post a fake review of how awesome your product is, and have that upvoted, it can pay huge dividends.
I've noticed 5 categories of inauthentic users. Ranked by my perceived prevalence:
Account farmers: these can be people in third-world countries, working manually or with automation. They can be using hundreds of mobile phones to create accounts and do daily activity to make each account look legitimate. While they're building an activity history, they are also being paid to like/follow/interact with content.
Advertisers: these are bought accounts that are used to post inauthentic reviews of their service, inject it into discussions, and do PR.
Sloppers: people who build AI pipelines and then just pump the most dogshit content directly into a platform trying to make any amount of money.
Nation State propaganda arms: These accounts build a narrative character and then join discussion pushing a certain narrative, boost real content creators who share their message and bog down discussion.
People like the above poster who are "just running an experiment" or "trying something for fun" who then wonder why online communities are full of AI now.
In the case of Reddit and HN a lot of it is done by businesses either blatantly advertising themselves or building up the karma they need to effectively do so. I recall reading obviously AI generated replies to news articles written by accounts associated with businesses related to the events in the news. This isn't new in the LLM era. Hobby subreddits are well known to be full of businesses selling hobby gear and items doing self-promotion. It's just that now it is a lot more obvious because of the AI text smell.
That, and probably political astroturfing. Before every election my local subreddit sees a surge of crime stories. Go figure.
I think some of it is account farming, but some is just people buying wholesale into the idea that if you're not using AI for everything, you're gonna be left behind. On the Kagi Small Web list, there's plenty of hobby blogs that used to be normal pre-2023 and are now obviously LLM-written and AI-illustrated. There's also plenty of people on LinkedIn who post AI slop because they think it helps them build a "professional brand". I even have some distant friends who are using AI for responding to friend & family posts on Facebook just because it makes you seem... smart? engaged? I don't know.
It's actively encouraged by some of the platforms too. In Gmail and Google Docs, you have incessant AI prompts along the lines of "help me write this". I think LinkedIn does the same.
HN has historically been gamed for visibility. The stakes for doing this can be quite high if you can pull it off.
Lots of marketing. Not even AI business, just regular consumer crap. They realized that blatantly spamming their product looks bad, so they orchestrate multiple accounts to look more organic. And people actually engage with it.
My impression is that they're sometimes unemployed people or students hoping to create a popular open source project, and use it to find a job.
They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.
Can't really say they're doing anything wrong; maybe I would have done the same? It's just that at large scale, it doesn't work.
There are many reasons for influence campaigns; that isn't new. Influencing the public is incredibly valuable; that's why so many invest so much in it. LLMs automate it like never before.
Plain advertising, governments' propaganda, political propaganda for one group or another to shift public opinion (it's done on TV networks, why would they not do online campaigns?), astroturfing by corporations promoting acceptance or fighting negative news (e.g. rideshare, AI, whatever certain wealthy personalities are doing) ... the list goes on.
HN has always been relatively influential in the tech industry and therefore worth influencing, and now the cost is very cheap - you don't even need to hire many people, so less-resourced operators will find it worthwhile (and they will also attack lower-value forums).
If you farm a fleet of good accounts, you control the discourse. On HN, you could boost whatever you're trying to push, and downvote or flagkill whoever objects.
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
We're in the middle of an active cold war where countries try to manipulate the citizens of rival countries to destroy their civilization without firing a single bullet. Anonymous mass manipulation over the internet, all for some minimal electricity cost.