In addition to the elsewhere-mentioned "you're using a black box to try to analyze the same black box," the fundamental metrics all seem highly sensitive to factors other than any Claude Code changes.
Claude Code changes all the time—it's the whole shitty trend of the day—but you can't tell which of those changes are better or worse from analyzing results on independent novel tasks.
And you're baking in certain conclusions: "HOLDING / SUSPECTED REGRESSION / CONFIRMED REGRESSION / INCONCLUSIVE". Where's the option for "better than the previous baseline"? It certainly seems possible that a session could post better-than-average numbers on the measured things.
Overall, though, there's just so much here that's uncontrolled. The most obvious thing that isn't controlled for is the work itself. What does the typical software project look like? A continued accumulation of more code performing more features. What's gonna make an LLM-based agent have to do more work? Having to deal with a larger, more complicated codebase. Nothing in this seems to deal with the possibility that a session labeled a regression might actually have scored even lower against a month-old Claude Code.
"It's harder to read code than to write code" and "codebases take more effort to modify over time as they grow" are ancient observations.
Drift detection would require static targets and frequent re-attempts.
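To make that concrete, here's a minimal sketch of what I mean, assuming a hypothetical run_fixed_task() harness that replays the exact same frozen task (same repo snapshot, same prompt) against whatever the current tool version is and returns a numeric score. None of this is from the original write-up; it's just what "static targets and frequent re-attempts" would look like, and it also naturally gives you an "improved" outcome rather than only regression labels.

```python
# Sketch of drift detection against a static target. run_fixed_task() is a
# hypothetical harness, not a real API: it replays one frozen task (fixed
# repo snapshot, fixed prompt) and returns a score such as tests passed.
import statistics

def run_fixed_task() -> float:
    """Hypothetical: replay the frozen task against the current tool version
    and return a numeric score. Stubbed out here."""
    raise NotImplementedError

def detect_drift(baseline_scores: list[float], attempts: int = 10,
                 threshold: float = 2.0) -> str:
    """Re-attempt the frozen task several times and compare the new mean
    against the baseline distribution. This only means anything because the
    target never changes; a growing, changing codebase confounds it."""
    new_scores = [run_fixed_task() for _ in range(attempts)]
    base_mean = statistics.mean(baseline_scores)
    base_sd = statistics.stdev(baseline_scores)
    delta = statistics.mean(new_scores) - base_mean
    if abs(delta) < threshold * base_sd:
        return "HOLDING"
    return "IMPROVED" if delta > 0 else "SUSPECTED REGRESSION"
```

Without that kind of fixed target, any per-session numbers are measuring the work as much as the tool.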
I use it every day and haven't seen any worsening. (It's definitely not static, but the general trend has been good.) But I use it on a codebase that was already very complex before we started using these tools, and overall, every three months or so has brought significant improvements in usability and accuracy.