You could argue that all the way down to machine code, but clearly at some point, and in many cases, the abstraction provided by a language like Python and a heap of libraries is descriptive enough that you don’t care what’s underneath.
The difference is that what those languages compile to is much, much more stable than what you get by running a spec through an LLM.
Python or a library might change the implementation of a sorting algorithm once every few years. An LLM is likely to do it every time you regenerate the code.
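To make that concrete, here’s a minimal sketch in plain CPython (the `smallest` function is just a made-up stand-in for whatever the spec describes): compiling the same source a hundred times yields byte-identical output, which is exactly the stability the spec-to-LLM workflow gives up.

```python
import hashlib

# Same source in, same bytecode out: for practical purposes the compiler is
# a stable mapping. Regenerating code from a spec with an LLM is not.
source = "def smallest(items):\n    return sorted(items)[0]\n"

digests = {
    hashlib.sha256(compile(source, "<spec>", "exec").co_code).hexdigest()
    for _ in range(100)
}
print(len(digests))  # 1 -- a hundred compilations, one distinct output
```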
It’s not just a matter of non-determinism, either; it’s about how chaotic LLMs are. Compilers can produce different machine code from slightly different inputs, but that’s nothing compared to how wildly LLM output swings with very small differences in input. Adding a single word to your spec file can make the final code unrecognizably different.
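The compiler side of that contrast is easy to show directly. In this sketch (again plain CPython, with a toy one-token change to the same made-up function), the two disassemblies differ by a handful of lines; with an LLM, the equivalent one-word change in the spec can rewrite the whole file.

```python
import dis
import difflib

# Two sources that differ by one token. A compiler's output tracks its input
# closely: the bytecode listings differ in a few lines, not wholesale.
src_a = "def smallest(items):\n    return sorted(items)[0]\n"
src_b = "def smallest(items):\n    return sorted(items)[-1]\n"  # one token changed

def listing(src: str) -> list[str]:
    ns: dict = {}
    exec(compile(src, "<spec>", "exec"), ns)
    return dis.Bytecode(ns["smallest"]).dis().splitlines()

diff = difflib.unified_diff(listing(src_a), listing(src_b), lineterm="")
print("\n".join(diff))  # a small, local diff; the rest is untouched
```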