That's a serious issue: how could retractions work with LLMs, and how could they be made to work?
Accuracy rots over time, at varying rates, and not only in scientific research.