Hacker News

dnautics · today at 3:46 AM · 0 replies

> Training data can't be the whole answer.

Absolutely correct. Anthropic showed that as few as 250 poisoned documents can backdoor an LLM -- independent of the model's parameter count.