My experience is similar. The AI's context is limited to the codebase; it has little or no understanding of the broader architecture or business constraints, which adds noise and makes it harder to surface the issues that actually matter.
It also acts mainly as an advanced linter. The other day it pointed out some surface-level changes in a piece of code, but didn't catch that the whole thing was unnecessary and could have been replaced with an INSERT ... ON CONFLICT ... DO UPDATE in Postgres.
Now, that could happen with a human reviewer as well. But it completely missed the context of the change.
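For what it's worth, the pattern I mean is roughly this. A minimal sketch using SQLite, since it adopted Postgres's ON CONFLICT syntax (3.24+) and is easy to run; the table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical example: the hand-rolled "select, then branch on insert vs.
# update" logic that a single upsert statement replaces.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

def record_hit(name):
    # One statement instead of SELECT + INSERT-or-UPDATE branching.
    # The same syntax works in Postgres.
    conn.execute(
        """
        INSERT INTO counters (name, hits) VALUES (?, 1)
        ON CONFLICT (name) DO UPDATE SET hits = hits + 1
        """,
        (name,),
    )

record_hit("home")
record_hit("home")
print(conn.execute(
    "SELECT hits FROM counters WHERE name = 'home'").fetchone()[0])  # 2
```

The point being: a reviewer with context would notice the surrounding application code is re-implementing what the database already does in one statement.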