I made two points:
- It is not accurate to describe training as “encoding works into the model”.
- A model cannot recreate a Harry Potter book.
Neither of these has anything to do with “the spirit of the law”.
> proportionality threshold for copyright to matter.
This is the part I have a problem with. That threshold was put there for humans, based on human capabilities; it's an extremely dishonest assessment that the same threshold must apply to an LLM and its outputs. Those works were created to be read by humans, not by a for-profit statistical inference machine, and any derivative works were expected to come from the former, not the latter. So the judge should have admitted that the context of the law is insufficient, and that copyright must include the power to forbid the use of one's work in such a model if copyright is to continue fulfilling its intended purpose (or move the case to the Supreme Court, I guess).
Can it not recreate a book?
I kind of assumed I could ask it for verses from the Bible one by one till I have the full book?
When I ask ChatGPT for a specific page or so from HP, I get the impression that the model would be perfectly capable of doing so but is hindered by extra work OpenAI put in to prevent the answer, specifically because of copyright. Which raises the question: what if someone manages some prompt trickery again to get past it? Are they then responsible?