Because it's a statistical process generating one part of a word at a time. It probably isn't even generating "surprise" as a unit; it might be generating "sur", then "prise", then "!".
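To make the subword point concrete, here's a toy greedy longest-match tokenizer. The vocabulary is made up purely for illustration (real models learn theirs, e.g. via byte-pair encoding), but it shows how "surprise!" can come out as three pieces rather than one word:

```python
# Hypothetical toy vocabulary - real tokenizers learn tens of thousands of entries.
VOCAB = {"sur", "prise", "sun", "!", "s", "u", "r", "p", "i", "e"}

def tokenize(text):
    """Greedy longest-match subword split against VOCAB."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest substring starting at i that is in the vocabulary.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to emitting it on its own.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("surprise!"))  # → ['sur', 'prise', '!']
```

The model predicts each of those pieces in sequence; "surprise" as a whole never has to exist as a single generated unit.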
We are also, technically, a statistical process generating one part of a word at a time when we speak. Our neurons form the same kind of vectorised connections LLMs do, and we are the product of repeated experiences - much the same way training works.
Our brains are more advanced, and we may not experience the world the same way, but I think we have clearly created rudimentary digital consciousness.
But what is surprise, really? Something not following expectation. The model's distribution may statistically leverage surprise as a concept based on how it has seen surprise expressed in its training data, e.g. "interesting!"
So it can simultaneously be true that the output has nothing to do with the felt emotion of surprise, and yet appear to emulate that emotion, because the training data encodes the concept of surprise (a mismatch between expectation and event).
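That "mismatch between expectation and event" has a standard information-theoretic form: surprisal, the negative log-probability of an event. A low-probability (unexpected) token is literally high-surprise, no emotion required. A minimal sketch:

```python
import math

def surprisal(p):
    """Surprisal in bits: how 'unexpected' an event with probability p is."""
    return -math.log2(p)

# An expected event (high probability) carries little surprisal...
print(surprisal(0.9))   # ≈ 0.15 bits
# ...while an unexpected one carries much more.
print(surprisal(0.01))  # ≈ 6.64 bits
```

In that narrow sense a language model "has" surprise as a measurable quantity over its predictions, which is a different thing from experiencing it.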