
lo_zamoyski · yesterday at 5:42 PM

Wirth's complaint makes sense when the utility of software remains constant (or degrades) while the cost of providing that utility rises. But we must clarify what "cost" means here (and the relevant costs are all ultimately economic costs), because this can explain the contours of why software is written the way it is. We should also clarify "utility".

Economically, something like memory is in the vicinity of 10 orders of magnitude cheaper today than it was in 1970 (a); similar things can be said about processors. This means the incentive to invest costly engineering resources (b) in optimizing software is very low. In terms of energy, a CPU instruction is at least millions of times more energy efficient today (c); that's another big economic disincentive. Furthermore, time spent optimizing is time not spent on product development (d): a slower product on the market can be better than late market entry.
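As a rough sanity check on (a), here is a back-of-envelope version using commonly cited figures rather than anything from the comment itself: magnetic-core memory around 1970 at roughly a cent per bit, commodity DRAM today at a few dollars per gigabyte. Under those assumptions it comes out to seven or eight orders of magnitude; more aggressive 1970 figures push it toward ten.

    import math

    # Back-of-envelope only; both figures are rough, commonly cited estimates.
    per_byte_1970 = 0.08   # $/byte: magnetic core at ~1 cent per bit
    per_byte_today = 3e-9  # $/byte: commodity DRAM at ~$3 per GB
    print(math.log10(per_byte_1970 / per_byte_today))  # ~7.4 orders of magnitude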

So we have: production costs of hardware (a), production costs of software as a function of engineering time (b), energy costs of hardware (c), energy costs of running software (also (c)), and the opportunity cost of late market entry (d). There's also the time cost of running software (e).

(a) is cheaper;

(b) depends on your measurement of utility;

(c) is cheaper;

(d) means shipping unoptimized software tends to be cheaper;

(e) depends on your measurement of utility.

So (b) and (e) are where Wirthian arguments can focus.

However, AI may yet play a major role in optimizing software; it is already being used in this space [0].

W.r.t. complexity, one consequence of abstraction is that it further decouples the cost of an operation from the difficulty of implementing it. Of course, the two were never identical to begin with: it is easier to implement bubble sort than quicksort, and easier still to come up with bubble sort when you have no knowledge of sorting algorithms. But greater abstraction is better at concealing computational complexity.

The example involving ORMs is a good one. When you write SQL by hand, you have a clearer picture of the database operations that will be performed, because the correspondence between what you write and what the database does is tighter. An ORM, by contrast, creates an abstraction over SQL that is divorced from the database. Unless the ORM is written in some crafty way that can smartly optimize the generated SQL (and optimizers have their limitations), you can land in exactly the situation the author describes.
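A minimal sketch of that trap, the classic "N+1 queries" pattern, using SQLAlchemy with hypothetical models and data (any ORM with lazy-loaded relations behaves similarly):

    from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base, relationship

    Base = declarative_base()

    class Author(Base):
        __tablename__ = "authors"
        id = Column(Integer, primary_key=True)
        name = Column(String)
        books = relationship("Book", lazy="select")  # lazy loading, the default

    class Book(Base):
        __tablename__ = "books"
        id = Column(Integer, primary_key=True)
        title = Column(String)
        author_id = Column(Integer, ForeignKey("authors.id"))

    engine = create_engine("sqlite://", echo=True)  # echo=True logs every query
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Author(name="Wirth", books=[Book(title="Project Oberon")]))
        session.commit()

        # Innocent-looking ORM code: one query for the authors...
        for author in session.query(Author).all():
            # ...plus one more query per author to load its books: N+1 in total.
            print(author.name, len(author.books))

        # The hand-written equivalent is a single query:
        #   SELECT a.name, COUNT(b.id) FROM authors a
        #   LEFT JOIN books b ON b.author_id = a.id GROUP BY a.id;

With lazy loading, the ORM happily issues a separate SELECT per row; eager-loading options exist in most ORMs, but you have to know the problem is there before you reach for them.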

W.r.t. learning from LLMs, that is perhaps the better application in many cases: a kind of sophisticated search engine. The trouble is that people treat LLMs as infallible oracles.

Another issue is that people seem not to care about becoming better themselves. You see this in thought experiments where we posit some AI that can do all the thinking and working for us. Many, if not most, people react as if this makes human beings "obsolete", which is such a patently absurd and frankly horrifying and obscene notion that it can only be an indictment of our consumerist culture. Obsolete with respect to what? A human life is not defined by economic utility. Human purpose is not instrumental. Even if an AI understood philosophy, science, etc., if I don't understand them, then I don't understand them. I am no better for it when someone, or some fictional AI, does. I am made no wiser.

[0] https://arxiv.org/abs/2503.15669