
rvz · today at 2:40 AM · 2 replies

Nope. You have not shown how a large-scale collection of neural networks, irrespective of architecture, is more deterministic than a compiler. You are only repeating the known misconception that setting the temperature to 0 brings the determinism you claim it does [0] [1] [2]; if it did, you would not have this problem in the first place.

And even if you do set it to 0, the resulting outputs are useless anyway, so that really does not help your point. Therefore:

> You're just using words incorrectly. Deterministic means repeatable. That's it. Predictable, verifiable, etc are tangential to deterministic.

There is nothing deterministic or predictable about an LLM, even when you compare it to a compiler, unless you can guarantee that inference through the individual neurons gives output predictable enough to serve as a drop-in compiler replacement.
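A rough sketch of the failure mode described in [0]-[2], using toy numbers rather than any real inference code: the logits feeding the temperature-0 argmax are large floating-point reductions, float addition is not associative, and the reduction order can change with batch size or kernel choice. When two candidate tokens are effectively tied, that rounding noise decides the "deterministic" greedy pick.

```python
import numpy as np

# Toy illustration (not any vendor's inference code): summing the same numbers
# in a different order gives bitwise-different float results, and when two
# logits are effectively tied, the argmax at temperature 0 can flip with it.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

sum_order_a = x.sum()                              # one reduction order
sum_order_b = x[rng.permutation(x.size)].sum()     # same numbers, different order

print(sum_order_a, sum_order_b)        # generally not bitwise-identical
print(sum_order_a == sum_order_b)      # usually False

# Two "runs" computing the same two tied logits, but with the reduction
# orders assigned differently (as can happen across batches or servers):
logits_run1 = np.array([sum_order_a, sum_order_b])
logits_run2 = np.array([sum_order_b, sum_order_a])
print(logits_run1.argmax(), logits_run2.argmax())  # greedy picks can disagree
```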

[0] https://152334h.github.io/blog/non-determinism-in-gpt-4/

[1] https://arxiv.org/pdf/2506.09501

[2] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...


Replies

hackinthebochs · today at 4:28 AM

Yes, there are some unknown sources of non-determinism when running production LLM architectures at full capacity. But that's completely irrelevant to the point: the core algorithm is deterministic. And you're still conflating deterministic with predictable. It's strange to have such disregard for the meaning of words and their correct usage.
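To make the "core algorithm is deterministic" claim concrete, here is a minimal sketch with a stand-in toy model (not a real LLM): greedy decoding is a pure function of the weights and the prompt, so with a fixed computation order the same inputs repeat the same tokens every time.

```python
import numpy as np

# Toy stand-in for a model: fixed "weights" plus greedy (temperature 0) decoding.
rng = np.random.default_rng(42)
W = rng.standard_normal((32, 32)).astype(np.float32)

def greedy_decode(prompt_ids, steps=5):
    ids = list(prompt_ids)
    for _ in range(steps):
        h = np.zeros(32, dtype=np.float32)
        for t in ids:
            # A trivial recurrent update standing in for the forward pass.
            h = np.tanh(W @ h + np.eye(32, dtype=np.float32)[t % 32])
        logits = W @ h
        ids.append(int(logits.argmax()))   # always the argmax: no sampling
    return ids

print(greedy_decode([1, 2, 3]))
print(greedy_decode([1, 2, 3]))  # identical: same inputs, same order, same output
```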
