The policy makes sense as a liability shield, but it doesn't address the actual problem, which is review bandwidth. A human signs off on AI-generated code they don't fully understand, the patch looks fine, it gets merged. Six months later someone finds a subtle bug in an edge case no reviewer would've caught because the code was "too clean."
I mean, the same can happen with human-written code, no? A reviewer signs off on it, and a subtle bug in an edge case slips past everyone?
Or do you mean the volume of commits will be so high that reviewers start making more mistakes?
> they don't fully understand, the patch looks fine
I don't get this part. Why is the reviewer signing off on it? AI code should be fully documented (probably more thoroughly than a human would manage) and should require new tests. Code review gates should not change.