I'm all for AI programming.
But I've seen this conversation on HN already 100 times.
The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I personally don't agree at all, in the sense that I don't think it matters. I've run into compiler bugs, and more library bugs than I can count. The real world is just as messy as LLMs are, and you still need the same testing strategies to guard against errors. Development is always a slightly stochastic process: you write stuff that you eventually get to work on your machine, and then you fix all the bugs that get revealed once it starts running on other people's machines in the wild. LLMs don't write perfect code, and neither do you. Both require iteration and testing.
I just answered exactly that. I think that AI agents code better than humans and are the future.
But the parent argument is pretty bad, in my opinion.
I wrote much more here[0], and honestly I'm on the side of Dijkstra: it doesn't matter whether the LLM is deterministic or probabilistic.
His argument has nothing to do with deterministic systems[1] and everything to do with the precision of the language. It comes down to "we invented symbolic languages for a good reason".

[0] https://news.ycombinator.com/item?id=46928421
[1] If we want to be more pedantic, we can codify his argument more simply using mathematical language, though even that takes some interpretation: natural language naturally imposes a one-to-many relationship when processing information.
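To make that one-to-many point concrete, here's a minimal, purely illustrative sketch (the phrasing and functions are my own invention, not from the thread): a single English instruction can admit several readings, while a symbolic expression admits exactly one.

```python
# Hypothetical illustration of natural language's one-to-many mapping.
# The English instruction "double a plus b" has at least two readings;
# the symbolic form (a + b) * 2 has exactly one.

def english_readings(a: int, b: int) -> set[int]:
    """All plausible interpretations of 'double a plus b'."""
    return {(a + b) * 2,   # double (a plus b)
            a * 2 + b}     # (double a) plus b

def symbolic(a: int, b: int) -> int:
    """The symbolic expression is unambiguous."""
    return (a + b) * 2

print(len(english_readings(3, 4)))  # 2 distinct readings
print(symbolic(3, 4))               # 14, and only 14
```

The point being: the one-to-many relationship is a property of the notation, independent of whether the system processing it is deterministic.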