
lumost · yesterday at 7:00 PM

Also, how can this be prevented? The AI labs can't seriously expect each lab to filter LLM-generated content out of its training sets based on the source model. Leakage of AI behavior into public datasets is inevitable.