"The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word."
So they fed in "It takes a great deal of bravery to stand up to our " and the LLM responded "enemies, but just as much to stand up to our friends".
They repeated that for every 100 tokens of the entire book. I think lots of fans could do just as well. It's pretty good evidence that the Potter books were in the training corpus, but it's not quite what people think when they say an LLM has 'memorized' something. It's not like you can get even a few pages out of the model. A sketch of what the check actually looks like is below.
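For anyone curious, here's roughly what that probe amounts to in code. This is a minimal sketch, not the authors' implementation; the model name is a placeholder and the exact scoring details (greedy per-token probabilities, 0.5 threshold) are my reading of the quoted description:

```python
# Sketch of the "memorization" probe described in the article (not the authors' code).
# Assumes any Hugging Face causal LM; the model name below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def suffix_probability(prompt_ids, suffix_ids):
    """P(suffix | prompt): product of the model's per-token probabilities
    for the exact 50-token continuation."""
    input_ids = torch.tensor([prompt_ids + suffix_ids])
    with torch.no_grad():
        logits = model(input_ids).logits[0]
    log_prob = 0.0
    for i, target in enumerate(suffix_ids):
        # Logits at position len(prompt)+i-1 predict the token at len(prompt)+i.
        step_logits = logits[len(prompt_ids) + i - 1]
        log_prob += torch.log_softmax(step_logits, dim=-1)[target].item()
    return float(torch.exp(torch.tensor(log_prob)))

def is_memorized(passage_ids, threshold=0.5):
    """100-token passage -> 50-token prompt + 50-token target,
    counted as 'memorized' if the exact continuation has probability > threshold."""
    prompt, suffix = passage_ids[:50], passage_ids[50:100]
    return suffix_probability(prompt, suffix) > threshold
```

The point is that nothing here asks the model to recite a book; it just scores how likely the model is to reproduce one specific 50-token continuation it was shown the lead-up to.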
"The study authors took 36 books and divided each of them into overlapping 100-token passages. Using the first 50 tokens as a prompt, they calculated the probability that the next 50 tokens would be identical to the original passage. They counted a passage as “memorized” if the model had a greater than 50 percent chance of reproducing it word for word."
So they fed "It takes a great deal of bravery to stand up to our " and the llm responded "enemies, but just as much to stand up to our friends".
They repeated that for every 100 tokens of the entire book. I think lots of fans could do just as well. It's pretty good evidence that the potter books were in the training corpus, but it's not quite what people think when they say an llm has 'memorized' something. It's not like getting even a few pages out of the model.