
tananan · today at 12:31 PM

In discussions like this, we're always going to bottom out at certain assumptions we bring with us, so I agree.

One reason I like bringing up examples like this (the xkcd in the sister reply is also good) is that they make our assumptions really visible. The scales are large in both space and time precisely to emphasize how much weight is being given to functional equivalence.

I feel pretty confident most people wouldn't presume that doing a bunch of math by hand on paper can create glacial epiphenomenal experiences (though I do like the term).

Another thing that's interesting to me is that the converse assumption, i.e. a strong allegiance to functionalism, ends up feeling far more idealistic than you might expect. A box of gas, left on its own for long enough, will engage in a pattern of collisions that, under a certain interpretive framework, corresponds to an LLM forward pass. Under another, it's a game of Minesweeper.

The individual particles, of course, couldn't care less whether you see them as part of one or the other. Yet your ability to see them in light of the first is perhaps enough for the lights to truly turn on, if transiently, in some mind somewhere.


Replies

ekidd · today at 1:14 PM

> A box of gas, left on its own for long enough, will engage in a pattern of collisions that in a certain interpretative framework correspond to an LLM forward pass.

That's a fun thought experiment. Greg Egan based a delightful science fiction novel on this premise: Permutation City, I believe.

To be clear, I don't necessarily think that current LLMs have subjective experiences. If I had to guess, I'd say "probably not." But:

- If I came from another universe, and if you asked me whether chemistry could have subjective experiences, I'd answer "probably not." And I would be wrong.

- Even if no current frontier models are "aware", it's possible that future models might be. Opus 4.6, for example, behaves far more like a coherent mind than last year's 3-billion-parameter toy models did. So future 100-trillion-parameter models with different internal architectures might be even more mind-like. (To be clear, I do not think we should build such models.)

- Awareness and intelligence might be different things. Peter Watts' Blindsight is a fun exploration of this idea. This leads me to conclude that it wouldn't necessarily matter whether an AI like SkyNet has subjective awareness or not; what matters is what kind of long-term plans it could pull off and how much it could reshape the world.
