Haven’t used a single one that was any good. It’s basically a 50/50 crapshoot whether what they say makes any sense at all, let alone qualifies as “good” review comments. No better than random chance.
Reminder that this comes from the founder who was rightly lambasted for his comments about work-life balance and then doubled down when called out.
Or stick with well-documented frameworks, so you don't have to pay for this nonsense, since these tools are mostly telling you things you'd already know if you tested and wrote your own code.
Oh, right - writing your own code is a thing of the past: AI writes the code, then AI finds the bugs.
One more AI code review please, I promise it will fix everything this time, please just one more.
There is an AI bubble.
Can drop the extra words
No shit. What is the point of using an LLM to review code produced by an LLM?
Code review presupposes a different perspective, which no platform can currently offer, because these tools are only as sophisticated as the model they wrap. Claude generated the code, and Claude was asked if the code was good enough, and now you want to sit in the middle and ask Claude again, but with more emphasis, I guess? If I want more emphasis I can ask Claude myself. Or Qwen. I can't even begin to understand this rationale.
My experience with code review tools has been dreadful. In most cases I can remember, the reviews were inaccurate, "you are absolutely right" sycophantic garbage, or missed the big picture. The worst feature of all is the "PR summary", which is usually pure slop lacking the context around why the PR was made. Thankfully that can be turned off.
I have to be fair and say that yes, occasionally, some bug slips past the humans and is caught by the robot. But these bugs are usually also caught by automated unit/integration tests or by linters. All in all, you have to balance the occasional catch against all the time lost "reviewing the code review" to make sure the robot didn't just hallucinate something.
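To be concrete, a plain unit test catches this class of bug deterministically, with no AI in the loop. A minimal sketch (the pagination helper and its off-by-one bug are made up for illustration):

    def last_page(total_items: int, page_size: int) -> int:
        # Buggy version: integer division silently drops the partial final page.
        return total_items // page_size

    def test_last_page_counts_partial_page():
        # 101 items at 50 per page need 3 pages; the buggy helper returns 2,
        # so pytest fails here every run, before any reviewer looks at the PR.
        assert last_page(101, 50) == 3

Run it under pytest and it fails every single time. No coin flip about whether the reviewer happened to notice.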