Even those are way more predictable than LLMs, given the same input. But more importantly, LLMs aren’t stateless across executions, which is a huge no-no.
Can you reliably predict the JIT-generated machine code given only the JVM bytecode?
Without taking into account everything else the JIT feeds into its decision tree, such as runtime profiling data?
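You can watch the JIT make those decisions at runtime. A minimal sketch (the class name and workload are made up for illustration): run it with `-XX:+PrintCompilation` and diff the compilation log across runs. Which methods get compiled, at which tier, and when depends on profiling counters and compiler-thread timing, not just the bytecode.

```java
// Run with: java -XX:+PrintCompilation JitDemo
// The compilation log (methods, tiers, timing) is rarely identical across runs,
// even though the bytecode is byte-for-byte the same.
public class JitDemo {
    static long sum(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i; // hot loop, an OSR candidate
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000; i++) {
            total += sum(10_000); // warm up until C1/C2 kick in
        }
        System.out.println(total);
    }
}
```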
> But more importantly, LLMs aren’t stateless across executions, which is a huge no-no.
They are, actually. A "fresh chat" with an LLM is non-deterministic, but it is stateless. Of course, agentic workflows add memory, possibly RAG, etc., but that memory is stored somewhere in plain English; you can just go and look at it. Such a system may not be stateless, but its state is fully known.
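To make that concrete, here is a minimal Java sketch assuming an OpenAI-style chat-completions wire format; the endpoint URL and model name are placeholders, not a real API. The entire conversation travels in the request body on every call, so the model itself keeps nothing between calls, and the "state" is just text you can print and read:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatelessChat {
    public static void main(String[] args) throws Exception {
        // The ENTIRE state of the "conversation" is this plain-text payload.
        // To continue the chat, you append messages and resend all of it.
        String messages = """
                [{"role": "user", "content": "What is a JIT compiler?"}]""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/chat/completions")) // placeholder endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"model\": \"some-model\", \"messages\": " + messages + "}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // the model retains nothing between calls
    }
}
```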