But how do you know it's cut to spec if you don't measure it?
Maybe someone bumped the fence while you were on a break, or vibration knocked the jig slightly out of alignment.
The basic point is that whether a human or some kind of automated process, probabilistic or not, is producing something, you still need to check the result. And for code specifically, we've had deterministic ways of doing that for 20 years or so.
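To make the "deterministic ways" concrete: the usual candidates are automated tests, type checkers, and linters. A minimal sketch of the testing case, using a hypothetical helper that mirrors the woodworking analogy above (the function name and tolerance are illustrative, not from any real library):

```python
# A deterministic check: given the same code and inputs, the result is
# identical on every run, regardless of whether a human or an LLM wrote
# the function under test.

def cut_to_spec(actual_mm: float, target_mm: float, tolerance_mm: float = 0.5) -> bool:
    """Hypothetical example: is the 'cut' within spec?"""
    return abs(actual_mm - target_mm) <= tolerance_mm

# Plain asserts standing in for a unit-test suite.
assert cut_to_spec(99.8, 100.0) is True    # within tolerance: pass every time
assert cut_to_spec(101.0, 100.0) is False  # out of spec: fail every time
print("all checks passed")
```

The point is only that the check itself is repeatable; as the replies below argue, that says nothing about whether the checks are complete.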
I guess the point being made by GP is that most software is a high-dimensional model of a solution to some problem. With traditional coding, you gradually verify it while writing the code, going from simple to complex without losing the plot. That's what Naur describes in "Programming as Theory Building": someone new to a project may take months to internalize that knowledge (if they ever do).
Most LLM practices throw you into the role of that newbie. Verifying the solution in a short time is impossible, because the human mind is not capable of grappling with that many factors at once. And if you want to do an in-depth review, you will basically be doing traditional coding, just without the typing, plus a lot of consternation when divergences arise.
> And for code specifically, we've had deterministic ways of doing that for 20 years or so.
And none of them is complete, because all of them rest on hypotheses taken as axioms. Computation theory is very permissive, and hardware is noisy and prone to interference.
> And for code specifically, we've had deterministic ways of doing that for 20 years or so.
And those ways all suck!
It's extremely difficult to verify your way to high-quality code. At lower levels of verification it's not good enough. At higher levels, the verification takes so much longer than writing the code that you'd probably get better results cutting off part of the verification time and using it to write the code you're now an expert on.