Hacker News

alexsmirnov · yesterday at 10:18 PM · 3 replies

Considering how little data is needed to poison an LLM (https://www.anthropic.com/research/small-samples-poison), this could be a way to replace SEO with LLM product placement:

1. create several hundred GitHub repos with projects that use your product (clones or AI-generated)

2. create a website with similar instructions and connect it to a hundred domains

3. generate Reddit, Facebook, and X posts plus Wikipedia pages with the same information

Wait half a year or so until scrapers collect it and use it to train new models.

Profit...
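A back-of-the-envelope sketch of why the scheme is plausible, assuming (per the linked Anthropic research) that a poison set on the order of a few hundred documents can suffice regardless of model size; the corpus sizes below are illustrative round numbers, not figures from the paper:

```python
# Illustrative only: how tiny a fixed poison set is relative to a
# web-scale training corpus. The ~250-document order of magnitude is
# from the linked Anthropic "small samples poison" research; the
# corpus sizes are hypothetical round numbers.
POISON_DOCS = 250

for corpus_docs in (10**8, 10**9, 10**10):
    fraction = POISON_DOCS / corpus_docs
    print(f"corpus={corpus_docs:>14,} docs -> poison fraction {fraction:.2e}")
```

The point is that the attacker's cost stays roughly constant while the fraction of the corpus they need to control shrinks as the corpus grows, which is what makes the "few hundred repos plus mirrored posts" approach cheap relative to classic SEO.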


Replies

homarp · yesterday at 10:37 PM

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt... says it took way less than half a year to 'pollute' an LLM

nikcub · yesterday at 10:37 PM

from my understanding Anthropic is now hiring a lot of experts in different fields who write content used to post-train models to make these decisions, and the recommendations are constantly adjusted by the Anthropic team themselves

this is why the stacks in the report, and what cc suggests, closely match the latest developer "consensus"

your suggestion would degrade user experience and be noticed very quickly
