> The frontier LLMs are getting pretty good at checking this sort of thing.
No, this is career-ending, high-stakes territory. It requires old-school "actually check a record of reality" methods, like a database query or an HTTP GET to one of the many services that hold this info.
LLMs can make tool calls to do database and HTTP queries to search for, buy, and cross-reference a citation.
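As a minimal sketch of what one such tool call could wrap: the public Crossref REST API (`https://api.crossref.org/works/{doi}`) returns bibliographic metadata for a DOI, which you can compare against what the paper claims to cite. The matching heuristic here is an assumption for illustration; a real checker would be fuzzier.

```python
# Hedged sketch: check that a cited DOI actually resolves, and that the
# title on record roughly matches the title the paper claims to cite.
# Uses the public Crossref REST API; an LLM tool call could wrap this.
import json
import urllib.request

def fetch_crossref_record(doi: str) -> dict:
    # Crossref returns 404 for unknown DOIs; the caller can treat that
    # as "citation not found in the record of reality".
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["message"]

def titles_roughly_match(claimed: str, recorded: str) -> bool:
    # Cheap normalization (illustrative assumption): strip everything
    # but alphanumerics and compare case-insensitively.
    norm = lambda s: "".join(c.lower() for c in s if c.isalnum())
    return norm(claimed) == norm(recorded)
```

Note this only catches the first kind of error (fabricated or mangled citations), not the second (real citation, wrong usage).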
I think they're saying that frontier LLMs may be usable to spot citations that are correct in shape (a real citation) but incorrect in usage (unrelated to the text they're attached to).
I kind of hate the idea, but you probably could do a lazy LLM check of every paper and every citation, and have it flag possibly wrong (in the second sense) citations for human review.
But you'd need a LOT of tokens and a LOT of human-hours.
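The "lazy check" loop described above could look something like this; `ask_llm_judge` is a hypothetical stand-in for whatever chat-completion call you'd use, and the prompt/verdict format is an assumption, not a known-good recipe.

```python
# Hedged sketch of the lazy per-citation screen: for each (passage, citation)
# pair, ask a judge model whether the cited work plausibly supports the claim,
# and queue the suspicious ones for human review.
from dataclasses import dataclass

@dataclass
class Flag:
    citation: str
    passage: str
    reason: str  # the judge's raw verdict, kept for the human reviewer

def screen_paper(pairs, ask_llm_judge, review_queue: list) -> None:
    # pairs: iterable of (passage_text, citation_string)
    for passage, citation in pairs:
        verdict = ask_llm_judge(
            f"Does the work cited as {citation!r} plausibly support this "
            f"claim?\n\n{passage}\n\nAnswer SUPPORTED or UNRELATED."
        )
        if "UNRELATED" in verdict.upper():
            review_queue.append(Flag(citation, passage, verdict))
```

Even this cheap version burns one model call per citation, which is where the "LOT of tokens" comes from; the human-hours are everything that lands in `review_queue`.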