
qarl | today at 1:10 AM

But that's about the output, not the training. We agree: outputs that supplant the original are the problem. A model constrained to produce only fair-use outputs causes no such harm, regardless of what it was trained on.