I guess the point GP is making is that most software is a high-dimensional model of a solution to some problem. With traditional coding, you gradually verify it while writing the code, going from simple to complex without losing the plot. That's what Naur calls the "theory" of a program (in "Programming as Theory Building"); someone new to a project may take months to internalize that knowledge (if they ever do).
Most LLM practices throw you into the role of that newbie. Verifying the solution in a short time is impossible, because the human mind is not capable of grappling with that many factors at once. And if you want to do an in-depth review, you will basically be doing traditional coding, but without the typing and with a lot of consternation when divergences arise.
> And for code specifically, we've had deterministic ways of doing that for 20 years or so.
And none of them are complete, because all of them rest on hypotheses taken as axioms. Computation theory is very permissive, and hardware is noisy and prone to interference.
I kinda like the analogy of travelling here.
With normal artisanal coding you take your time getting from A to B, and you might discover alternate routes while you slowly make your way to the destination. There's also a clear cost to backtracking and trying an alternate route: you already wrote the "wrong" code and now it's useless. But you also gained knowledge, and maybe on a future trip from A to C or C to D you'll know that a side route like that is a bad idea.
Also, because it's you, a human with experience, you know not to walk down ravines or hit walls at full speed.
With LLMs there's very little cost in backtracking. You're pretty much sending robots from A to B and checking if any of them make it every now and then.
The robots will jump down ravines and take useless side routes because they lack the lived experience and "common sense" of a human.
BUT what makes the route easier for both are linters, tests, and other syntactic checks. If you manage to dig a full-on Elmo-style tunnel from A to B, it's impossible to miss no matter what kind of single-digit-IQ bot you send down the tube at breakneck speed. Failing that, you can just add a few "don't walk down here, stay on the road" signs along the way.
Coincidentally the same process also makes the same route easier for inexperienced humans.
tl;dr If you have good specs and tests and force the LLM to never stop until the result matches both, you'll get much better results. And even if you don't use an AI, the very same tooling will make it easier for humans to write good-quality code.
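The "never stop until the result matches the tests" loop can be sketched as a tiny harness. Everything here is hypothetical scaffolding: `generate` stands in for whatever produces a candidate (an LLM call in practice, a toy iterator below), and `run_tests` stands in for your real test suite.

```python
def iterate_until_green(generate, run_tests, max_attempts=10):
    """Keep requesting new candidates until the test suite accepts one,
    mirroring the 'don't stop until specs and tests pass' loop."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate()
        if run_tests(candidate):
            return candidate, attempt
    raise RuntimeError("no passing candidate within the attempt budget")

# Toy stand-ins: a 'generator' emitting candidate implementations of an
# increment function, and a 'test suite' that only accepts a correct one.
candidates = iter([lambda x: x - 1, lambda x: x * 2, lambda x: x + 1])

def generate():
    return next(candidates)

def run_tests(f):
    # the "spec": increment by one
    return f(1) == 2 and f(5) == 6

winner, attempts = iterate_until_green(generate, run_tests)
print(attempts)  # the third candidate is the first to satisfy the spec
```

The point of the sketch is that the quality of `run_tests` is doing all the work: a permissive suite lets a bad robot "arrive" somewhere that isn't B.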