> So just because we can't show HOW they've been encoded in the model, it's fair use?
Describing training as “encoding them in the model” doesn’t seem like an accurate description of what is happening. We know for certain that a typical copyrighted work in the training data is not contained within the model: it’s simply not possible to represent the entirety of the training set within a model of that size in any meaningful way. There are also papers showing that memorisation plateaus at a fairly low level relative to model size. Training on more works doesn’t result in more memorisation; it results in more generalisation. So arguments premised on those works being copied into the model don’t seem to be founded in fact.
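To put rough numbers on that point, here's a back-of-envelope sketch. The specific figures (a 70B-parameter model trained on ~15T tokens, roughly Llama-3-class) are illustrative assumptions, not numbers from this thread:

```python
# Back-of-envelope: can a model "contain" its training set?
# All figures below are illustrative assumptions, not measurements.
train_tokens = 15e12        # ~15 trillion training tokens (assumed)
bytes_per_token = 4         # ~4 bytes of raw text per token, on average
params = 70e9               # 70B parameters (assumed)
bytes_per_param = 2         # fp16/bf16 weights

corpus_bytes = train_tokens * bytes_per_token   # ~60 TB of text
model_bytes = params * bytes_per_param          # ~140 GB of weights

print(f"corpus: {corpus_bytes / 1e12:.0f} TB")  # corpus: 60 TB
print(f"model:  {model_bytes / 1e9:.0f} GB")    # model:  140 GB
print(f"ratio:  {corpus_bytes / model_bytes:.0f}x")  # ratio: 429x
```

The weights are roughly 400x smaller than the text they were trained on, and the best lossless text compressors manage maybe 4-8x, so verbatim storage of the corpus is off the table by orders of magnitude, whatever else the weights are doing.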
> I can't sell you a Harry Potter book, but I can sell you some service that lets you generate it yourself?
That’s the reason why cases like this are doomed to fail: No model can output any of the Harry Potter books. Memorisation doesn’t happen at that scale. At best, they can output snippets. That’s clearly below the proportionality threshold for copyright to matter.
Copyright was built to protect artists from unauthorized copying by humans, not by machines (machines wildly beyond their imagination at the time, I mean). The input and output limitations of humans were absolutely taken into account when those laws were written. If LLMs were treated in a similar fashion, authors would have a say in whether their works can be used as inputs to such models, or could forbid it outright.
> That’s clearly below the proportionality threshold for copyright to matter.
This type of reasoning keeps coming up with seemingly zero consideration for why copyright actually exists. The goal of copyright, under US law, is "to promote the Progress of Science and useful Arts".
The goal of companies creating these LLMs is to supersede the source material they draw from, like books. You use an LLM because it has all the answers, without anyone having to spend money compensating the original authors or put in the work of digesting the material themselves; that's their entire value proposition.
Their end game is to create a product so good that nobody has a reason to ever buy a book again. A few hours after you publish your book, the LLM will gobble it up and distribute the insights contained within to all of its users for free. "It's fair use", they say. There won't be any economic incentive to write books at that point, and so "the Progress of Science and useful Arts" will grind to a halt. Copyright defeated.
If LLM companies are allowed to produce market substitutes for original works, then the goal of copyright is being defeated on a technicality, and this ought to be a discussion about whether copyright should be abolished entirely, not about whether big tech should be allowed to get away with it.