But, well, how does it do the human-like-text-outputting exactly?
I’m guessing you aren’t just asking how an LLM works, but attempting to make the point that humans are also statistical next-token predictors or something?
Humans make predictions, but that doesn’t mean that’s all we do.
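For what it’s worth, the phrase “statistical next-token predictor” has a concrete meaning that’s easy to show in miniature. The sketch below is a toy bigram model, nothing like a real transformer, but it illustrates the core operation being debated: estimate P(next token | context) from data, then sample from that distribution.

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on vastly more text with a neural
# network, but the sampling step at the end works the same way.
corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which (a bigram "model").
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Estimated P(next token | previous token) from the counts."""
    c = counts[prev]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

def sample_next(prev, rng=random):
    """Draw one next token according to the estimated distribution."""
    dist = next_token_distribution(prev)
    toks, probs = zip(*dist.items())
    return rng.choices(toks, weights=probs, k=1)[0]

# After "the", the corpus gives "cat" 2/3 of the time and "mat" 1/3.
print(next_token_distribution("the"))
```

Whether humans do anything analogous to this loop is exactly the point under dispute above; the code only pins down what the LLM side of the comparison means.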