Interesting:

> Reviews are billed on token usage and generally average $15–25, scaling with PR size and complexity.
You've got to be completely insane to use AI coding tools at this point.
This is the subsidised cost to get users on board; it could trivially end up ten times this amount. Plus, you've got the ultimate perverse incentive: the company selling you the model time to create the PRs is also selling you the review of those same PRs.
Does AI review of AI generated code even make sense?
what are the implications for the dozens of code review platforms that have recently raised at sky-high valuations?
nice but why is this not a system prompt? what's the value add here?
> We've been running Code Review internally for months: on large PRs (over 1,000 lines changed), 84% get findings, averaging 7.5 issues. On small PRs under 50 lines, that drops to 31%, averaging 0.5 issues. Engineers largely agree with what it surfaces: less than 1% of findings are marked incorrect.
So the takeaway would be that 84% of heavily Claude-driven large PRs are riddled with ~7.5 report-worthy bugs each.
Not a great ad for the quality of agent-based development.