So, when I code review, I have a super simple Cursor command that "orients" me in the PR:
* where does the change sit from a user perspective?
* what are the bookends of the scope?
* how big is the PR?
* etc.
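For the curious: a Cursor command is just a markdown prompt file you can trigger with a slash. A minimal sketch of what an "orient" command might look like (filename and wording are illustrative, not my exact command):

```markdown
<!-- .cursor/commands/orient.md — hypothetical example of a Cursor command file -->
# Orient me in this PR

Look at the current diff and answer, briefly:

1. Where does this change sit from a user perspective?
2. What are the bookends of the scope -- what's in, what's out?
3. How big is the change (files touched, lines, risky areas)?
4. Anything else I should know before I start reading?
```

That's it. The value isn't the prompt, it's that I never start reading a diff cold.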
Once I'm "in" and understand what it does, I pepper the AI with questions:
* Why did the author do this?
* I don't understand this, can you explain?
* This looks funky, can you have a look?
* etc.
The more questions I ask, the more the AI will (essentially) go "oh, I didn't think of that -- in fact, the issue looks way more serious than I first thought, let me investigate". It compounds: the more I ask, the more issues the AI finds; the more issues the AI finds, the more issues I find. There are no shortcuts to quality control -- the human drives the process, AI is merely (and I hate to use this term but I will) a...force multiplier.