LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.
To imply it could be conscious requires something else; here the comment uses the word "magic" to fill that gap, since we must agree that a CPU is not conscious on its own (otherwise everything our computer does would be conscious).
A human brain is not conscious on its own.
Many things the human brain does don’t rise to the level of conscious awareness.
It remains to be seen whether a human brain can be conscious in a jar. If it can, then I'd still argue that some sub-unit of the whole brain is not conscious on its own. Similarly, a GPU running a GPT probably isn't conscious, but there may be some scale, some number of GPUs running software, at which consciousness arises as an emergent ability.
GPTs have exhibited emergent abilities as scale increased dramatically.
They stopped being plain autocomplete years ago, with RLHF.
It sounds like you believe in magic, then. What is this "something else" behind consciousness that can't be done with sufficiently advanced math?
Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?
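The "summing inputs" picture above is exactly what an artificial neuron does. A minimal sketch, assuming a simple perceptron-style unit with a sigmoid output (illustrative only; biological neurons are far more complex than this):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, the core of the "just summing" claim.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashes the sum into (0, 1), a crude stand-in for a firing rate.
    return 1.0 / (1.0 + math.exp(-total))

# Example: two inputs, hand-picked weights and bias (arbitrary values).
print(neuron([1.0, 0.5], [0.8, -0.2], 0.1))
```

The whole point of the analogy is that nothing in this computation looks conscious, whether it runs in chemistry or in silicon.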
https://en.wikipedia.org/?title=Emergent_behavior