The author partially acknowledges this later on, but lines of code is actually quite a useful metric. The only mistake is that people have it flipped: lines of code are bad, and you should target fewer of them (though not at the expense of other considerations). I regularly track LoC, because if it goes up more than I predicted, I probably did something wrong.
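For what it's worth, the tracking doesn't need to be fancy. Here's a rough Python sketch of what I mean; the baseline and predicted numbers are obviously made up:

```python
# Rough sketch of the LoC tracking I mean. The baseline and the
# predicted delta are hypothetical numbers, not from any real repo.
import subprocess

def tracked_loc(repo: str = ".") -> int:
    """Total line count across all files tracked by git."""
    files = subprocess.run(
        ["git", "-C", repo, "ls-files"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    total = 0
    for path in files:
        try:
            with open(f"{repo}/{path}", "rb") as fh:
                total += sum(1 for _ in fh)
        except OSError:
            pass  # deleted or unreadable in the working tree
    return total

if __name__ == "__main__":
    baseline = 48_210        # LoC when I started the change (hypothetical)
    predicted_delta = 150    # my guess before writing any code
    actual_delta = tracked_loc() - baseline
    if actual_delta > predicted_delta:
        print(f"+{actual_delta} LoC vs predicted +{predicted_delta}: "
              "worth asking what went wrong")
```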
> Bill Gates compared measuring programming progress by lines of code to measuring aircraft building progress by weight
Aircraft weight is also a very useful metric: like LoC, weight is bad, so you want less of it. But we do measure it!
Targeting fewer lines of code would result in the exact same gamified metric.
LoC desirability also depends on the project's stage.
Early on we should see huge chunky contributions and bursts. LoC growth means things are being realized.
In a mature product shipping at a sustained and increasing velocity, seeing LoC decrease or grow glacially year-on-year is a warm fuzzy feeling.
By my estimation, aircraft designs should grow a lot for a bit (from 0 to not 0), churn for a while, then aim for specified performance windows in periods of punctuated stability.
Reuse scenarios create some nice bubbles where LoC growth in highly validated frameworks/components is amazing, as surrounding systems obviate big chunks of themselves. Local explosions, global densification and refinement.
Author here; I think it can be a useful metric depending on the circumstance and use. The reason I decided to write that article is that I'm starting to hear of more and more CTOs using it as the sole metric for their teams; I know of at least one instance where a CTO is pushing for agentic coding only and measuring each dev based on LoC output.
There is also the x.com crowd bragging about their OpenClaw agents pushing 10k lines of code every day.
The problem with optimizing for fewer lines of code is the same as optimizing for unit tests: the less robust your code is, the better off you are.
Meaning, it's trivial to write unit tests when your code is stupid, only does happy-path stuff, and blows up on anything else. So if we say "you need 90% coverage" or whatever, people will write stupid, frail code that barely works in practice but is easy to unit test.
Similarly, if we say "do it with the least amount of code", we will also throw any hope of robustness out the window and only write stupid happy-path code.
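To make that concrete, here's a toy Python sketch (function names are made up): the happy-path version is shorter and trivially reaches 100% coverage, while the robust one "spends" extra lines to survive real-world input:

```python
# Toy illustration, names made up: the short version games both
# metrics (fewest lines, trivially 100% coverage), the long one is
# what you actually want running in production.

def parse_price_happy(s):
    # One line. parse_price_happy("3.50") passes; coverage reads 100%.
    return float(s)

def parse_price_robust(s: str) -> float:
    # More lines "spent", more tests needed, fewer 3 a.m. pages.
    if not isinstance(s, str):
        raise TypeError(f"expected str, got {type(s).__name__}")
    s = s.strip().lstrip("$")
    if not s:
        raise ValueError("empty price string")
    value = float(s)  # still raises ValueError on garbage input
    if value < 0:
        raise ValueError(f"negative price: {value}")
    return round(value, 2)
```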
Everyone misses the technical goal of Google-size AI.
Fill the gradient of machine states, then prune for correctness and utility.
That is not to say it's a good goal. But at the end of the day, every program is electrical states in a machine. Fill the machine, like a search; see which states are required to produce the most popular types of outputs; prune the rest.
Hint to the syntax fans among programmers: most people will not be asking the machine to output Python or Elixir. Most will ask for movies, music, games. Bake in the states needed to render, and prune that geometry and color as needed. That geometry will eventually include text shapes too, enabling pruning away all the existing token systems like Unicode and ANSI. Storing state in strings is being deprecated.
Language is merely one user interface to reality. Grasp of it does not make one "more human" or in touch with the universe or yadda yadda. Such arguments are the pretentious attention-seeking of those educated in a particular language. Look at them! ...recreating grammatically correct sentences per the rules of the language. Never before seen! Wow wow wow
Look at all the software written, all the books and themes within. A grasp of language these days is as novel an outcome as going to the grocery store or using a toilet.
Dijkstra's quote from 1988 is even better: "My point today is that, if we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."