Hacker News

satvikpendem · yesterday at 4:31 PM

https://arxiv.org/abs/2305.11169

https://arxiv.org/abs/2506.02996


Replies

tovej · yesterday at 10:06 PM

One conference-proceedings paper and one preprint, about LLMs encoding either the relative geometric positions of objects or simple 2D paths.

One of the papers calls this "programming language semantics", but it is using a 2D grid-navigation DSL. The semantics of that language are nothing like the semantics of an actual programming language.

These are not the same as the concept being discussed here: a human "world model" of a computer system, through which to interpret the semantics of a program.

salawat · yesterday at 5:38 PM

Their world model is entirely a byproduct of language, though, not of experience. Furthermore, by deliberate design they do not maintain any form of self-recognition or narrative tracking, which is the necessary substrate for developing and validating experience. The world model of an LLM is still a map, not the territory. Even though ours arguably has some of the same qualities, the identity we carry with us and our self-narrative are incredibly powerful in allowing us to stay aligned with the world as she is, without munging it up quite as badly as LLMs seem prone to.
