> A box of gas, left on its own for long enough, will engage in a pattern of collisions that in a certain interpretative framework correspond to an LLM forward pass.
That's a fun thought experiment. Greg Egan built a delightful science fiction novel around this premise: Permutation City, I believe.
To be clear, I don't necessarily think that current LLMs have subjective experiences. If I had to guess, I'd say "probably not." But:
- If I came from another universe, and if you asked me whether chemistry could have subjective experiences, I'd answer "probably not." And I would be wrong.
- Even if no current frontier models are "aware", it's possible that future models might be. Opus 4.6, for example, behaves far more like a coherent mind than last year's 3-billion-parameter toy models. So future 100-trillion-parameter models with different internal architectures might be even more mind-like. (To be clear, I do not think we should build such models.)
- Awareness and intelligence might come apart. Peter Watts' Blindsight is a fun exploration of this idea. Which leads me to conclude that it wouldn't necessarily matter whether an AI like SkyNet has subjective awareness or not. What matters is what kind of long-term plans it could pull off and how much it could reshape the world.
> Which leads me to conclude that it wouldn't necessarily matter whether an AI like SkyNet has subjective awareness or not. What matters is what kind of long-term plans it could pull off and how much it could reshape the world.
Absolutely. Thanks for the references :)