How can consciousness be possible without internal state? LLM inference is equivalent to repeatedly reading a giant look-up table (a pure function mapping a list of tokens to a probability distribution over next tokens). Is the look-up table conscious merely by existing, or does the act of reading it make it conscious? Does the format it's stored in make a difference?
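The equivalence being claimed can be made concrete: a deterministic next-token model is a pure function from a token sequence to a distribution, so every output could in principle be precomputed into a table. A minimal sketch, using a hypothetical toy "model" as a stand-in for real LLM inference:

```python
VOCAB = ("a", "b", "c")

def model(tokens: tuple[str, ...]) -> dict[str, float]:
    """A toy pure 'model': the output depends only on the input tokens."""
    scores = [len(tokens) + i + (1 if t in tokens else 0)
              for i, t in enumerate(VOCAB)]
    total = sum(scores)
    return {t: s / total for t, s in zip(VOCAB, scores)}

# Memoising builds the look-up table entry by entry: once an input has been
# seen, "inference" really is just reading the table.
lut: dict[tuple[str, ...], dict[str, float]] = {}

def lut_model(tokens: tuple[str, ...]) -> dict[str, float]:
    if tokens not in lut:
        lut[tokens] = model(tokens)  # compute once; afterwards pure table lookup
    return lut[tokens]

# Function and table agree on every input, which is the sense in which the
# model "is" a (lazily materialised) look-up table.
assert model(("a", "b")) == lut_model(("a", "b")) == lut_model(("a", "b"))
```

The table over all possible inputs is astronomically large, which is why it only exists lazily here; the philosophical question is whether that difference in storage format matters.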
What state is lacking? Producing any output requires computation to be performed. The model weights are the state. That computation must be run afresh on each input to produce its output. What are you even objecting to?
For all practical purposes, calling it a LUT is too reductive to be useful here, I think. But we can try: leaving aside LLMs for a second, with this LUT-style reasoning you're using, would you be able to prove the existence of even an ordinary computer? It, too, is "just" a function from inputs to outputs.