I think the author is missing a key distinction.
Before, lines of code were (mis)used to try to measure individual developer productivity. And there was the collective realization that this fails, because good refactoring can reduce LoC, a better design may use fewer lines, etc.
But LoC never went away; it's still used, for example, to estimate the overall level of complexity of a project. There's generally a valid distinction between an app that has 1K, 10K, 100K, or 1M lines of code.
Now, the author is describing LoC as a metric for determining the proportion of AI-generated code in a codebase. And just like estimating overall project complexity, there doesn't seem to be anything inherently problematic about this. It seems good to understand whether 5% or 50% of your code is written using AI, because that has gigantic implications for how the project is managed, particularly from a quality perspective.
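For what it's worth, a minimal sketch of how such a measurement could work in practice, assuming (purely hypothetically, this is not anything the author describes) that AI-assisted commits carry a "Co-authored-by: ai-assistant" trailer; the marker string is my invention and would need to match whatever convention a team actually uses:

```python
# Rough sketch, not a robust tool: tally lines added by "AI" vs "human"
# commits. The "Co-authored-by: ai-assistant" trailer is a hypothetical
# tagging convention; substitute whatever marker your tooling really emits.
import subprocess
from collections import defaultdict

AI_MARKER = "Co-authored-by: ai-assistant"  # hypothetical convention

def added_lines_by_origin(repo_path: str = ".") -> dict:
    # %x01 and %x02 are control-byte delimiters so commit bodies and
    # --numstat output can be split apart reliably.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=%x01%B%x02"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(int)
    for entry in log.split("\x01")[1:]:
        message, _, stats = entry.partition("\x02")
        origin = "ai" if AI_MARKER in message else "human"
        for line in stats.splitlines():
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit():  # "-" marks binary files
                totals[origin] += int(parts[0])
    return totals

if __name__ == "__main__":
    totals = added_lines_by_origin()
    grand_total = sum(totals.values()) or 1
    print(f"AI share of added lines: {100 * totals['ai'] / grand_total:.1f}%")
```

Of course this only counts additions at commit granularity; blame-based attribution over the current tree would be closer to "proportion of the codebase", but the idea is the same.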
Yes, as the author explains, if the AI code is more repetitive and needs refactoring, then the AI proportion will overstate how much functionality that code actually contributes. But at the same time, it's entirely accurate in terms of how that code is possibly a larger surface for bugs, exploits, etc.
And when the author talks about big tech companies bragging about the high percentage of LoC being generated with AI... who cares? It's obviously just for press. I would assume (hope) that code review practices haven't changed inside of Microsoft or Google. The point is, I don't see these numbers as being "targets" in the way that LoC once were for individual developer productivity... they're more just a description of how useful these tools are becoming, and a vanity metric for companies signaling to investors that they're using new tools efficiently.
If tech companies want to show they have a high percentage of LoC being generated by AI, it's likely they are going to encourage developers to use AI to further increase these numbers, at which point it does become a measure of productivity.
> It seems good to understand whether 5% or 50% of your code is written using AI, because that has gigantic implications for how the project is managed, particularly from a quality perspective.
I'd say you're operating on a higher plane of thought than the majority in this industry right now. Because the majority view roughly appears to be "Need bigger number!", with very little thought, let alone deep thought, employed towards the whys or wherefores thereof.
I don't think the author is missing this distinction. It seems that you agree with him on his main point, which is that companies bragging about LoC generated by AI should be ignored by right-thinking people. It's just that you buried that substantive agreement at the end of your "rebuttal".
> I would assume (hope) that code review practices haven't changed inside of Microsoft or Google.
Google engineer perspective:
I'm actually thinking code reviews are one of the lowest-hanging fruits for AI here. We have AI reviewers now in addition to the required human reviews, and they can do anything from being overly defensive at times, to flagging inconsistently named variables (helpful), to sometimes catching a pretty big footgun that might otherwise have been missed.
Even if it's not better than a human reviewer, the faster turnaround time on some small % of potential bugs is a big productivity boost.
> the overall level of complexity of a project
The overall level of complexity of a project is not an "up means good" kind of measure. If you can achieve the same amount of functionality, obtain the same user experience, and have the same reliability with less complexity, you should.
Accidental complexity, as defined by Brooks in No Silver Bullet, should be minimized.
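A toy illustration of the distinction (my example, not Brooks's): both functions below do the same job, but the first carries accidental complexity in the form of manual index bookkeeping that the second sheds with no loss of functionality. This is exactly why a falling LoC count can mean the code got better:

```python
# Same functionality, different amounts of accidental complexity.

def total_verbose(prices):
    # Manual index bookkeeping: more lines, more places to get it wrong.
    total = 0
    i = 0
    while i < len(prices):
        total += prices[i]
        i += 1
    return total

def total_idiomatic(prices):
    # Fewer lines, identical behavior: the "lost" LoC was accidental complexity.
    return sum(prices)

assert total_verbose([1, 2, 3]) == total_idiomatic([1, 2, 3]) == 6
```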