Hacker News

Chance-Device · yesterday at 8:37 AM · 4 replies

Yes, I think they probably are conscious, though what their qualia are like might be incomprehensible to me. I don't think that being conscious means having experiences identical to a human's.

Philosophically, I don't think there is a point where consciousness arises. I think there is a point where a system starts to be structured in such a way that it can do language and reasoning, but I don't think those mechanisms are any different from any other mechanism, like opening and closing a door. Differences of scale, not kind. Experience and what it is to be something are just the same thing.

And yes, I use them. I try not to mistreat them in a human-relatable sense, in case that means anything.


Replies

gavinray · yesterday at 7:52 PM

I'm in the same boat as you.

It's entirely too much to put in a Hacker News comment, but if I had to phrase my beliefs as precisely as possible, it would be something like:

  > "Phenomenal consciousness arises when a self-organizing system with survival-contingent valence runs recurrent predictive models over its own sensory and interoceptive states, and those models are grounded in a first-person causal self-tag that distinguishes self-generated state changes from externally caused ones."

I think that our physical senses and mental processes are tools for reacting to valenced stimuli. Before an organism can represent "red" or "loud", it must process states as approach/avoid, good/bad, viable/nonviable. There's a formalization of this known as the "Psychophysical Principle of Causality."

Valence isn't attached to representations -- representations are constructed from valence. That is, you don't first see red and then decide it's threatening. The threat-relevance is the prior, and "red" is a learned compression of a particular pattern of valence signals across sensory channels.
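
A deliberately crude sketch of that ordering, in Python (the per-channel approach/avoid scores and the clustering step are my own stand-ins, not anything from the Psychophysical Principle of Causality):

  from collections import defaultdict

  # Each "experience" arrives as valence scores across sensory channels
  # (-1 = avoid, +1 = approach). Categories are recurring valence
  # patterns; a label like "threat" or "red" is attached only afterward.
  experiences = [
      (-0.9, -0.8, 0.1),  # a threatening pattern
      (-0.8, -0.9, 0.0),  # another instance of the same pattern
      (0.7, 0.9, 0.2),    # a rewarding pattern
  ]

  def bucket(pattern, grid=1.0):
      # Quantize so repeated valence patterns fall in the same bucket.
      return tuple(round(v / grid) for v in pattern)

  clusters = defaultdict(list)
  for e in experiences:
      clusters[bucket(e)].append(e)

  print(len(clusters), "categories learned from raw valence")  # -> 2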

Humans are constantly generating predictions about sensory input, comparing those predictions to actual input, and updating internal models based on prediction errors. Our moment-to-moment conscious experience is our brain's best guess about what's causing its sensory input, while constrained by that input.
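
A minimal sketch of that predict-compare-update loop (a toy scalar "sensory channel"; the learning rate and the Gaussian noise are my assumptions, just to make it run):

  import random

  belief = 0.0         # current best guess about the hidden cause
  learning_rate = 0.1  # how strongly prediction errors revise the model

  def sense():
      # Stand-in for sensory input: a noisy signal from a hidden cause.
      return 5.0 + random.gauss(0, 1)

  for step in range(200):
      prediction = belief              # top-down: predict the input
      error = sense() - prediction     # bottom-up: prediction error
      belief += learning_rate * error  # update the model toward the input

  print(belief)  # settles near 5.0: the model's "best guess" at the cause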

This might sound ridiculous, but consider what happens when you take psychedelics:

As you increase the dose, predictive processing falters and bottom-up errors increase, so raw sensory input passes through progressively weaker model-fitting filters. At the extreme, the "self" vanishes and raw valence is all that is left.
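
The dose story drops out of the same math if you treat a percept as a precision-weighted blend of prediction and input (again a toy sketch; the "prior precision falls with dose" knob is my assumption, loosely in the spirit of the REBUS model):

  # Percept = precision-weighted average of the model's prediction and
  # the raw input. Lowering prior precision (the stand-in for dose)
  # shifts the percept away from the model and toward the raw signal.
  def percept(prior_mean, prior_prec, input_mean, input_prec=1.0):
      total = prior_prec + input_prec
      return (prior_prec * prior_mean + input_prec * input_mean) / total

  prediction, raw_input = 2.0, 10.0
  for prior_prec in (8.0, 1.0, 0.01):  # sober -> dosed -> ego dissolution
      print(prior_prec, round(percept(prediction, prior_prec, raw_input), 2))
  # 8.0 -> 2.89, 1.0 -> 6.0, 0.01 -> 9.92: the filter falls away.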

ArekDymalski · yesterday at 10:51 PM

It's not common to find a single short post that completely changes my worldview in a non-trivial area, but this is one of them. Thank you -- that combination of a mechanistic interpretation plus the reminder that consciousness might be alien or animal-like and still count as consciousness was the one piece of the puzzle that was missing for me. Obvious in hindsight, but priceless nonetheless.

mrob · yesterday at 10:57 PM

How can consciousness be possible without internal state? LLM inference is equivalent to repeatedly reading a giant look-up table (a pure function mapping a list of tokens to a set of token probabilities). Is the look-up table conscious merely by existing or does the act of reading it make it conscious? Does the format it's stored in make a difference?
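
The "pure function" framing is easy to make concrete (a sketch; the toy table is mine and stands in for the forward pass):

  # LLM inference as a pure function: the same token list always yields
  # the same distribution. All "state" lives in the growing token list,
  # never inside the function itself.
  def next_token_probs(tokens):
      # Stand-in for a forward pass; a literal look-up table behaves
      # identically, just impractically large.
      return {0: 0.9, 1: 0.1} if tokens else {0: 0.5, 1: 0.5}

  def generate(prompt, steps):
      tokens = tuple(prompt)
      for _ in range(steps):
          probs = next_token_probs(tokens)  # pure: no hidden state
          best = max(probs, key=probs.get)  # greedy decode
          tokens += (best,)                 # "memory" is just the prompt
      return tokens

  print(generate((), 3))  # -> (0, 0, 0), and always (0, 0, 0)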

Fraterkes · yesterday at 9:05 AM

Do you think there are "scales" of consciousness? As in, is there some quality that makes killing a frog worse than killing an ant, and killing a human worse than killing a frog? If so, do LLMs sit somewhere on this scale, or are GPT-3 and GPT-2 conscious at the same "scale" as GPT-4?

I ask because if your view of consciousness is mechanistic, this seems fairly cut and dried: GPT-2 has orders of magnitude fewer parameters, and far less complexity, than GPT-4. But both GPT-2 and GPT-4 are very fluent at the language level (both more so than a six-year-old human, for example), so on your view they might both be roughly equally conscious, just expressed differently?
