No, they just need to be trained to have adversarial self-review "thinking" processes.
You ask an LLM "What's wrong with your answer?" and you get pretty good results.
Or the original answer was correct, and the adversarial "rethinking" switches it to an incorrect one.
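A minimal sketch of that self-review loop, assuming an OpenAI-style chat API; the model name, prompts, and the "NO ISSUES" sentinel are illustrative placeholders, not anything the thread specifies:

```python
# Self-review loop sketch: answer, critique the answer, optionally revise.
# Assumes the `openai` package and an API key in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(messages):
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

def answer_with_self_review(question: str) -> str:
    history = [{"role": "user", "content": question}]
    first = ask(history)
    history.append({"role": "assistant", "content": first})
    # Adversarial self-review: ask the model to attack its own answer.
    history.append({"role": "user", "content":
        "What's wrong with your answer? If nothing, reply 'NO ISSUES'."})
    critique = ask(history)
    if "NO ISSUES" in critique.upper():
        return first  # review found nothing; keep the original answer
    # Failure mode from the comment above: this revision step can
    # replace a correct first answer with an incorrect one.
    history.append({"role": "assistant", "content": critique})
    history.append({"role": "user", "content":
        "Rewrite your answer, fixing those issues."})
    return ask(history)
```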