Hacker News

AI slop is killing online communities

681 points | by thm, yesterday at 6:46 PM | 587 comments

Comments

CrzyLngPwd yesterday at 7:42 PM

I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.

It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.

It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.

I fear losing the battle.

AdminAccount today at 7:47 AM

I think the only reason Stack Overflow still has any activity is because the community chose to ban AI content [1], as did most of the network's other sites [2].

Perhaps it will even see a (small) resurgence when AI providers start charging for the actual costs.

[1] https://meta.stackoverflow.com/questions/421831

[2] https://meta.stackexchange.com/questions/384922

agustechbro yesterday at 7:26 PM

I kind of feel this might be good. Bot-written comments and AI media that can no longer be distinguished from the real thing will make us humans leave the social networks, which helped separate us in the first place. We'd go back to the real world, where you can truly believe what you see, and enjoy the tone, look, and scent of our fellow human beings.

alaudet yesterday at 10:31 PM

The balance is so far out of whack with LLMs now in online communities. People crave human interaction with like-minded individuals, and whoever figures out how to give authentic online experiences is going to be successful. Maybe small communities need to come back, where you build credibility slowly. Why does every site have to be a monstrosity that wants to build a hundred million users to IPO? It just attracts the worst. I was active on Reddit for years under the same username I have here. I have pretty much abandoned it.

carlgreene yesterday at 7:18 PM

I have largely written Reddit off and no longer visit it after an experiment I did where I had an agent karma farm for me and do some covert advertising. As I went through the posts it wrote I realized that as a reader I would have NO idea that these were just written by a computer. Many many people (or other bots) had full on conversations with it and it scared me a bit.

I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.

Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.

motbus3 yesterday at 9:08 PM

The company I work for has a deep-rooted community side, and despite what big tech does, I am 100% confident that the community features we have exist solely for the users' benefit. No gray area. Just that.

Since the AI sloppification we have lost a considerable amount of traffic to bots. But worse than that, we have lost the users who tended to contribute back to others.

We can leverage multiple ways of exposing community data to members, so the loss isn't really there; it's more that we have 30 years or so of good feedback on how the community around the platform was good for people, and now everything is at risk...

Don't get me wrong, my work is work... There are premium features and so on, but the amount of value one can get for free is what the platform is known for. And we know many people use it for free for years, and when they need to or can, they subscribe and mostly stay for years and years.

The fact that people are losing those connections is depressing to me.

culebron21 yesterday at 8:49 PM

Sadly the imperative is, as often, a call for everyone to be a good guy and make less noise. Unfortunately, it doesn't work, neither at the personal level nor at the global one.

One may be quiet, but what if your friend/acquaintance/fellow gets possessed by some AI slot machine and starts sharing his "products" enthusiastically? I had such a case, and right from the beginning I was dismissive and rude, and it didn't work -- he keeps sharing various artifacts.

On a global level, yes, communities die out. I think global communication has reached the point where it's more a liability than a benefit. In the late '90s and early '00s, maybe until the early '10s, getting more connected could lead you to nice clients, getting hired, etc. Nowadays (even before ChatGPT in '22) every such area had become overcrowded, underbid, and so on, and LLMs surprisingly added little new -- they just amplified the trend.

helsinkiandrew today at 9:33 AM

> But respect the community, and only share what is truly relevant. Save the crayon pictures for your kitchen fridge.

That highlights the problem: it's not AI, it's the oversharing that's the issue. Many people have moved from "sharing what's unusual/interesting/exciting to me" to "what can I share today?"

The constant stream of mediocrity drove me away from Facebook (years ago) and then Instagram.

Aeroi yesterday at 7:21 PM

You're absolutely right!

dwa3592 yesterday at 7:40 PM

When LLMs were new on the scene, I thought trust would fade in the written (text) medium. I saw it happening on Substack, Medium, and Reddit. But then VCs pumped in so much money, and AI has gotten into every other modality (audio, video). The only things I really interact with these days are the human beings sitting in front of me, phone calls with people I know, and Hacker News. Life seems sorted, but something feels missing as well.

Edit - I am not anti-AI, but it is slowly killing digital human interaction.

Groxx today at 1:52 AM

Giant online communities, yes. Small ones seem totally unaffected afaict - some harder to spot scam/spam accounts, but they're outed as soon as they act. And any invitation-based thing should almost perfectly block those.

Smaller communities are generally a lot healthier anyway, so tbh I don't think this is all that bad of a thing. I don't think it's possible to be open to millions and also be healthy, unless you spend a lot of money paying moderators (and regularly rotating them, to prevent burn-out or mental harm from too much exposure, which ~0 do in an even slightly ethical way).

noahgolmant yesterday at 7:38 PM

There has to be room for an AI-driven project that expresses a unique idea, even if there's no community around it yet. Someone has to express it, and from now on that idea will largely be implemented with AI.

> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.

I agree 100% with the novel contribution aspect. But there's some nuance there.

For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.

As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.

I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.

teknover today at 12:37 AM

I have pondered the sensibility of using AI to support the initial birth of new communities, given the social validation people need of seeing both (1) a populated community and (2) a community whose tone is grounded, non-toxic, and useful.

The alternative is a newborn community that is small, with early adopters who can be overly passionate or critical and who gatekeep folks out of discussion. That means high curation effort initially.

TeaVMFan today at 5:40 AM

This bothered me so much that in my tool for HTML-native authors, EPublish ( https://frequal.com/epublish/ ), I automatically insert a no-AI-training clause on the copyright page. Not that it will stop the kind of executives who will authorize mass unauthorized downloading of books to train their LLMs, but we have to at least take a stand.

throw7 yesterday at 8:47 PM

"Build with AI."

No, I don't think I will.

janice1999 yesterday at 8:06 PM

Question for web devs - are captchas effective any more? If Reddit required a captcha on every comment, would it actually decrease bot comments?

lizknope today at 12:14 AM

I was on Usenet starting in 1991. Once the Internet got popular with the general public around 1995 things started going downhill. Spam overwhelmed Usenet in the late 1990's and made it almost unusable for general discussion.

Stuff started moving to web site forums which I still don't think are as good as a Usenet newsreader. slrn was my favorite.

Then reddit came along and a lot of online forums started dying as people moved to reddit.

Just this morning on reddit I reported 4 separate posts to the moderators as AI slop. They need to update the report categories; for now I flag it as "disruptive use of bots".

For 2 of the posts the moderators agreed with me and about 5 hours later the posts were removed. For the other 2 the moderators haven't done anything.

It's a losing battle.

Some of the posts start by asking questions like "I was thinking about this and... [long rambling paragraphs] Your thoughts on this?"

I waste a minute reading then another minute skimming the rest of it and then realize I wasted 2 minutes of my life. Then another 30 seconds reporting it to the mods.

This has exploded in the last 6 months.

Then there are all the repost bots farming for karma. Some subs have a rule that you can't repost something from the last 30 days or 6 months. But it is really ridiculous when something gets 500 upvotes and then literally the next day a bot reposts the same thing and it still gets 300 upvotes. I think it is just a bot farm upvoting the stuff.

dwaltrip yesterday at 9:32 PM

Like many modern woes, it’s a problem of trust.

The baseline level of trust in an online interaction has been eroded significantly by LLMs.

The question is, how can we reverse this trend and increase trust?

I have a sneaking suspicion that it would help enormously if the stock prices of the largest companies in the world were not tied to how effective they are at hijacking as much of humanity’s time and attention as possible.

Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.

Let’s empower people to effectively have more control over the content they interact with.

Social dynamics can make this difficult. We all want to be in the loop. The recent striking successes of the movement to ban phones in schools gives me hope.

builtbystef today at 12:25 AM

The “slopification” of the internet has been happening for years now but I honestly don’t know what a real solution would look like.

Most people aren't willing to go through an identity verification process or pay to join a community, and invitation-only spaces would probably lose diversity of thought pretty quickly.

Even still, I guess one of the above is a lesser evil because the bot problem is only going to become more unbearable.

P.S. Props to the author. I really liked this writing style.

deferredgrant yesterday at 10:55 PM

The sad part is that the cost gets pushed onto the good participants. Once enough replies feel synthetic, real people spend more energy deciding whether the conversation is worth joining.

ionwake today at 9:42 AM

Also, I've noticed this odd behaviour where, if I mention one of my comments is AI (as in "this is what the AI says about the", because it's a concise statement to aid the chat), I get severely downvoted. But if I just make my comment basically a human-parsed version of the AI output, I get upvotes, with no concern for the granularity of source integrity. Which is terrible in two ways.

Galanwe yesterday at 11:28 PM

At this point, I don't see identity verification or any kind of proof-of-humanity working.

I think what we need is the equivalent of what was done for CORS: client/server cooperation.

That is, APIs should mark that they are human-only, and harnesses should cooperate with such flags and refuse to call those APIs.

It's not perfect, as it's client-side enforcement, and one could still theoretically build their own harness without it, but that's the only way forward.
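To make the idea concrete, here is a minimal sketch of such cooperation. Everything in it is hypothetical: the `X-Human-Only` header name is made up for illustration, and no agent harness currently honors anything like it.

```typescript
// Hypothetical CORS-style cooperation: the server advertises a made-up
// "X-Human-Only: 1" response header, and a well-behaved agent harness
// checks for it before writing to the endpoint.
function harnessMayPost(headers: Record<string, string>): boolean {
  // Header names are case-insensitive; assume they were lowercased on receipt.
  const value = (headers["x-human-only"] ?? "").trim().toLowerCase();
  return !["1", "true", "yes"].includes(value);
}

console.log(harnessMayPost({ "x-human-only": "1" }));         // false
console.log(harnessMayPost({ "content-type": "text/html" })); // true
```

As the comment concedes, this is client-side enforcement only: like robots.txt, it filters cooperative harnesses, not adversarial ones.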

olup yesterday at 8:19 PM

I feel that a lot in my side projects: maybe one should keep the half-baked AI repo to oneself and instead share the experiment, the thesis, and the learnings from building it. No one cares much about the (un)finished product, as in most cases it can be replicated better with a couple of hours of Claude coding.

For instance, I really liked how Karpathy shared a high-level idea for an LLM-based wiki. It was sadly followed by a long tail of no-one-cares "Here is my LLM wiki product" posts pointing to generic LLM-generated landing pages.

cody_ellingham today at 3:34 AM

What if you charged people to post to change the incentives? Something like https://stacker.news/

pupppet yesterday at 8:13 PM

I want my future community apps and sites to build in a bot flagger. I don't care how hard it is; the community that gets this right is the one I'll jump ship to.

rad_val today at 9:22 AM

AI slop complaining about AI slop. Many of these Reddit communities were trash way before AI. Hidden self promotion was everywhere. These people would like a platform to promote their shit, but they turn violent when others do. This guy literally wrote this with Claude complaining about others sharing things they created with Claude.

sunir today at 2:18 AM

A few things. A web of trust of some kind, like vouching, may come back, along with general algorithmic silencing of low-quality members. Also, most governments are moving toward the South Korean model of government-verified ID to post online, to keep teenagers off social media. The same tool could be used to greatly reduce spam and slop, if that's what platforms want.

People will also get used to AI in online spaces as AI quality improves. If I'm online trying to get help with some task, I personally don't care who wrote what, as long as it is correct; it's not like humans have a great track record of accuracy or substantial contribution either, on average. Correctness is expensive in general.

If I'm online trying to relate to other humans emotionally, well, I get what I'm paying for. It's been true forever that the better the gate, the better the community. I've tried to push the boundaries of openness, but as I've written extensively on MeatballWiki, soft security depends on there being more good apples than bad in a community. With machine intelligence, the economics of that are silly.

Regardless, people love people, so we'll figure it out. I'm optimistic we can rise to this challenge.

ramon156 today at 9:07 AM

Instagram comments genuinely make me angry.

It used to be because the comments lacked any critical thinking, probably because most people on Instagram are teenagers. That's fine, and for that reason I stopped reading comments.

But now it's pretty obvious that the comments are LLMs talking. Whether a human initiated them, no idea, but the big walls of text from bobbyfoo2012 seem highly unlikely.

Aeroi today at 1:58 AM

1. Human verification for auth.

2. Only a human-generated input composer: no copy/paste, no file uploads, etc. Control the composer; control the camera sessions for photos and videos.

3. No algorithmic feed designed for ad spend and eyeballs.

4. Moderate.
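Point 2 in the list above is partially implementable with today's DOM: editing events carry a real `InputEvent.inputType` field (values like `insertText` and `insertFromPaste`), so a composer can refuse non-typed input. A sketch, where the choice of which types to block is my own assumption:

```typescript
// InputEvent.inputType is a real DOM field; which types count as "human
// typing" is a policy choice, assumed here.
const BLOCKED_INPUT_TYPES = new Set([
  "insertFromPaste",
  "insertFromDrop",
  "insertFromYank",
]);

function isAllowedInputType(inputType: string): boolean {
  return !BLOCKED_INPUT_TYPES.has(inputType);
}

// Browser-only wiring, left as a comment so the sketch runs anywhere:
// composer.addEventListener("beforeinput", (e) => {
//   if (!isAllowedInputType((e as InputEvent).inputType)) e.preventDefault();
// });
console.log(isAllowedInputType("insertText"));      // true
console.log(isAllowedInputType("insertFromPaste")); // false
```

Of course this only adds friction: an agent driving a real browser can still synthesize keystroke events, so it filters lazy bots, not determined ones.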

bitvvip today at 2:04 AM

Entering the AI era, it's hard to tell the authenticity of things on the Internet. But sometimes having a conversation with an AI is not a big deal, as long as we can gain something from it.

Hobadee today at 2:19 AM

> I built a homepage on Geocities, complete with...a web counter

Yes, but how many decimal places did you optimistically give it, only to never use more than the "10s" place?

OgsyedIE yesterday at 7:41 PM

It sucks that the narrative framing device of 'human slop' has vanished in the last year. Some subreddits (all location subreddits, lifestyle subreddits like malefashionadvice and redscarepod, and entry-level academic subreddits like math and criticaltheory) were already hives of human slop before AI came around, because of a structural design of the site whose side effect was normalising a total absence of quality control.

Upvotes are not a good mechanism for quality control in any way, because they force good content to carry the same metadata as content that is technically well-constructed but irrelevant, meaningless, just a platitude, too obvious to be worth saying, or pablum. Upvotes turn everything into a shock-value-dominated 101 space.

tailscaler2026 yesterday at 7:46 PM

Online communities that allow upvoting / downvoting have been effectively dead for a long time because it's easy to manipulate conversations by elevating and punishing comments to fit a narrative. This is especially true on HN.

doginasuit today at 2:28 AM

The important thing to recognize is that quality of content has never been the driver of online communities. As long as they provide an engaging break from real life, they will exist and thrive. I think the negative association with LLMs is a phenomenon that will die out in the 20s. Our understanding of authenticity will evolve and so will the tools and platforms. The internet has always been extremely artificial, that won't change very much.

CM30 yesterday at 7:49 PM

There's a lot of focus on tech projects here, but it's not just vibe written projects that are ruining communities now.

No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.

YouTube is arguably even worse, since AI-led content farms are not just spamming the hell out of every topic under the sun but giving outright dangerous advice and misinformation on top of that. I saw this video earlier about medical misinformation from these 'creators', and it genuinely made me want to see a crackdown on this junk:

https://www.youtube.com/watch?v=UEfCTCBDKIU

And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?

The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...

If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.

yamanakatakeshi today at 1:14 AM

I feel the root of the problem is that Google and major platforms defined "correctness" as "high impressions" and "high engagement." This created a game where AI-generated "slop" becomes the ultimate winner. For those of us trying to create or find constructive, deeply-thought-out content, the situation is becoming increasingly dire.

It is exhausting to see a single, sincere sentence based on genuine human experience buried under 1,000 pages of SEO-optimized, AI-generated "void" that Google deems "correct." Despite this, I will keep working on filtering through the noise today.

geoffdouglas yesterday at 7:28 PM

This is a good thing. Social media was already slop before AI. If this gets more intellectuals off these same websites and they instead spend their time on better things, then I love AI slop's purpose. There's more to the internet than Reddit, TikTok, and YouTube. Really, there is. If your circle of friends is small or nonexistent without going to the same dotcoms, you have an issue worse than any AI slop, tbh.

liminis yesterday at 8:18 PM

I'll remove the particulars to avoid anything partisan, but:

I failed to truly appreciate how cooked Reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme' starring a particular political leader whose unpopularity is hard to adequately convey to foreigners.

It was not just that the post had been so heavily upvoted; the comment section itself had become a mantra, with very little actual conversation, just the same sentiment echoed over and over, and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't know if I'd ever even written the word 'astroturfing' before expressing my bafflement to a friend, so I don't think I'm very tinfoil-hat about these things.

It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded, monotonously, like he was the second coming.

retinaros yesterday at 9:35 PM

AI is lifting the voices of the lazy and of below-average to average people. For those who would never have progressed, it might seem like a god-given gift. For the ones with the desire to grow and learn and go beyond average... this is a curse.

kmfrk yesterday at 10:12 PM

I wonder when things get so bad that we end up filtering content made by accounts created before the release of ChatGPT.
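As a toy sketch of that filter: the only hard fact below is ChatGPT's public release date (November 30, 2022); the account shape and field names are invented for illustration.

```typescript
// Hypothetical pre-ChatGPT account filter; the Account shape is made up.
const CHATGPT_RELEASE = new Date("2022-11-30T00:00:00Z");

interface Account {
  name: string;
  createdAt: Date;
}

// True for accounts that provably existed before LLM-generated posting
// was cheap; everything after the cutoff would need other trust signals.
function predatesChatGPT(account: Account): boolean {
  return account.createdAt.getTime() < CHATGPT_RELEASE.getTime();
}

console.log(predatesChatGPT({ name: "veteran", createdAt: new Date("2015-06-01T00:00:00Z") }));  // true
console.log(predatesChatGPT({ name: "newcomer", createdAt: new Date("2024-01-01T00:00:00Z") })); // false
```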

dharmatech today at 12:31 AM

I've been messing around with a decentralized social network where you only see who you choose to follow.

It's implemented for Plan 9, but clients could be made for any OS:

https://youtube.com/watch?v=q6qVnlCjcAI

RF_Enthusiast today at 4:43 AM

Maybe Friendster has the right idea.

gos9 today at 1:58 AM

Online communities died when they were monetized and open

ianbutler yesterday at 7:19 PM

I made this point elsewhere, but people are learning what the rest of us had to learn the old way: for the most part, no one cares about your stuff, and now the value provided has to go way up to get people to care. That is, as the author says, the novelty has worn off, and since we know it's AI, the perceived value is also way down.

We're all recalibrating.

I really do think this is just a brief period before most people realize that slop posting doesn't personally get them anything, most give up, and we go back to roughly the old ratio of cool things with real value to see, but on a bigger scale, because AI helps one person do more.

originalvichy yesterday at 9:15 PM

Re: "The Asymmetry of Bullshit"

I'm gonna speak on behalf of language models' capability to make online communities better. In recent times, the frustrating forum phenomenon of "learned helplessness" has been making me too annoyed to participate. Even in a fantastic subreddit like /r/LocalLLaMA, there are people posting replies in the vein of

> user1: please help me understand this acronym the post title speaks of
>
> user2: (explains in detail what it means)

In the "good old days", a low-effort, surface-level question would result in someone either muting or banning the person to keep the discussion high quality.

There I am, browsing a forum dedicated to LLM enthusiasts, and an unbelievable number of people are asking LMGTFY/RTFM-level questions they could find an answer to even from a free Google Search AI summary, and people are rewarding them by actually responding with effort.

Since models are now quite good at answering the basics, the ban-hammer should be used more swiftly when people keep polluting forums with low-quality posts. There's no need to feel bad for them not having the time or capability to read through years of forum posts to feel qualified to answer.

Maybe the authors of these sloppy posts can even be outright muted or banned with a heavier hand, for the sake of quality.

mrkramer yesterday at 7:26 PM

The importance of good search engines and good discovery engines will grow even more.

spiderfarmer today at 7:39 AM

My communities hate all things AI. So AI content just doesn't survive.

foxfired yesterday at 7:41 PM

For every argument against AI slop, you will get a variation of "it's the future", or "I'm 10x more productive now", or "I've shipped 3 applications in 2 days", etc.

They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.

There is a reason the Show HN projects that are mostly vibecoded don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a shell of their sources.

erpellan yesterday at 10:20 PM

Invert the economics. Right now, the value of posting LLM-generated content exceeds the cost of using the model.

If platforms had a subscription model that you had to pay for in order to do more than just read comments, there’d be a lot less LLM content. There would also be a lot less of all content. But maybe that’s the price you pay (literally) to get rid of AI slop.
