Yup. Not my problem.
You could even say it would strongly incentivize the LLM companies to be on their best behavior, since otherwise people would start revoking consent en masse and they'd have to keep training new models all the time.
If you want something more realistic, there would probably be time limits on how long they have to comply, and rules on how much they have to compensate the authors for the time it took.
There absolutely are ways to make it work in mutually beneficial ways. There's just no political will, because of the current hype and because companies have learned they can get away with anything (including murder, BTW).
> Yup. Not my problem.
And that is why the entire industry is going to roll their eyes and ignore you.
No law is putting this genie back in the bottle, so all that's left to do is adapt and push for models with open training data, like those from Ai2.