Hacker News

marliechiller today at 9:01 AM

Why do you think stringing words together is any more a sign of consciousness than Google Maps finding the best route to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean it is conscious.


Replies

vidarh today at 10:34 AM

A large part of the problem is what you consider consciousness.

If you mean having a subjective experience, then we don't know of any way to prove that humans other than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.

So the bigger trap is to assume that we know what causes a subjective experience, and what does not.

None of us even know if a subjective experience exists for more than a single entity.

But the second problem is that it is not clear at all whether that subjective experience in any way matters.

Unless our brains exceed the Turing computable (and we have no evidence that is even possible), then either whatever causes the subjective experience is also within the Turing computable, or it cannot in any way influence our actions.

Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems. The best proxy we have is whether they appear conscious when we communicate with them.

roxolotl today at 1:20 PM

Yeah, a while back I read an article with a quote along the lines of "what happened to weather prediction has happened to language." That's an oversimplification on both sides, but if you think LLMs are conscious, there's good reason to think GFS (the Global Forecast System weather model) is too.

dumpsterdiver today at 9:29 AM

> Just because something can communicate in a way that you can interpret doesn't mean it is conscious

The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience and no other. It refuses to examine the underlying substrate, at which point we're no longer even speaking the same language when discussing consciousness.

mseepgood today at 9:12 AM

> It seems to me that humans often fall into the trap of anthropomorphism.

That's true, but they also often fall into the trap of exceptionalism.

energy123 today at 9:27 AM

There are people who think Google Maps is a tiny bit conscious (the intersection of computational functionalists and panpsychists), a view they adopt to avoid positing some magical binary threshold.

stavros today at 9:35 AM

Why do you think it's definitely not?

dTal today at 12:07 PM

I would caution against deriving too much of your philosophical worldview from a sci-fi book about posthuman vampires that was deliberately engineered to make a philosophical point which is most certainly not a consensus.

For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.

threethirtytwo today at 12:45 PM

Except we don't know how those words are strung together. Right? Why not analyze it a little further instead of shutting down your own brain and settling for this superficial conclusion?

You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer your question, but how did it know the order, and which words to use, to make the answer correct? You don't actually know. No one does, and it is in that unknown space that we suspect consciousness may lie.

Something is there that humanity as a whole cannot understand, and this lack of understanding is the same fundamental lack of understanding we have of how a monkey brain, dog brain, or even human brain works. We do not know whether humans, dogs, or monkeys are conscious; you only assume other living beings are conscious because you yourself experience consciousness and extend that assumption to others. We can't even define what it is, because consciousness is a loaded word, like spirituality.

This is not anthropomorphism. You attribute the bias wrongly. It is instead a stranger phenomenon among people like you, who can mysteriously characterize the LLM only as a next-token predictor and nothing beyond that, even though the token prediction clearly indicates greater intelligence at work.

The tl;dr is that we don't actually know, and that consciousness is a live possibility given what we don't know, and given that we attribute consciousness to other living beings with an equivalent grasp of complex topics.