So you have to be able to identify a priori what is and isn't a hallucination, right?
I guess the real question is how often you see the same class of hallucination? For something where you're using an LLM agent/workflow and running it repeatedly, I could totally see this being worthwhile.
Yeah, reading the headline got me excited too. I thought they were going to propose some novel solution or use the recent research by OpenAI on reward function optimization.
The oracle problem is solved. Just use an actual oracle.