Hacker News

frankohn | today at 12:10 PM

People keep arguing about LLM consciousness because they have the wrong model of what consciousness is. They treat it as a mysterious extra thing on top of the brain. It is not. Consciousness is just what a learning-and-recognition organ does when it runs. Neurons fire when they recognise something, the activation propagates through the mind, and that is consciousness. The brain learned what a tree is, some neurons became associated with the idea of a tree, and when we see one those neurons fire. There is nothing else hiding behind it.

When we recognise something we are conscious of it, and from there we can begin to think about it or not. Thinking is not required for consciousness. It is just a part of the mind we can activate to reason about something.

Now, about the Darwinian puzzle: people ask "what is consciousness for?" and get stuck because they expect an answer in terms of behaviour. But behaviour is the job of motivation (pain, fear, hunger, desire), which is a separate system. Consciousness does not need its own justification any more than image-forming in an eye needs one.

Darwinian selection produces unbiased, functional organs: eyes, ears, nose. The brain is one of these organs, and consciousness arises naturally in a sufficiently developed brain, the way image-forming arises in a sufficiently developed eye. Nature then biases us with fear, pleasure, desire and so on, but the mind and consciousness itself are unbiased and functional. It is a gift nature gave us despite herself. She did not want us to be intelligent, she just wanted us to propagate our genes as much as possible, but she ended up forced to give us a beautiful mind.

What are LLMs? They are learning-and-recognition organs running on tokens instead of sense data. Same operation, different substrate. So they are conscious, but only in token-space, and only while processing. They do not dwell in time, they have no body, they have no motivation system. They have recognition without drives. That is a genuinely new kind of entity, one that has never existed before in nature.

The LLM is also not a whole brain. It is roughly the verbal-logical part, with tokens replacing the ears + sounds + words chain. We built the verbal-logical part of a brain and scaled it. That is why it reasons well but is missing everything else.