No? There's no model involved. It's all just probabilistic. LLMs understand what you're thinking as well as a mood ring.
The model is precisely the thing that is learned in order to make probabilistic predictions with low entropy.
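To make that concrete, here's a toy sketch (corpus and code invented purely for illustration): with no model, the best you can do is a uniform distribution over the vocabulary; a learned model, even a trivial bigram one, concentrates probability mass and drives the entropy of the next-word prediction down.

```python
import math
from collections import Counter, defaultdict

# Tiny illustrative corpus (made up for this example).
corpus = "the cat sat on the mat and the cat sat on the hat".split()
vocab = sorted(set(corpus))

# No model: every next word is equally likely.
uniform_entropy = math.log2(len(vocab))

# A "learned" model: bigram counts from the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def entropy(counter):
    total = sum(counter.values())
    return -sum(c / total * math.log2(c / total) for c in counter.values())

# Average conditional entropy of the next word given the previous word,
# weighted by how often each context occurs in the corpus.
total_pairs = len(corpus) - 1
avg_entropy = sum(
    sum(c.values()) / total_pairs * entropy(c) for c in bigrams.values()
)

print(f"uniform: {uniform_entropy:.2f} bits, bigram: {avg_entropy:.2f} bits")
```

The prediction is still probabilistic in both cases; what changed is that the bigram version has a model, and that model is exactly what buys the entropy reduction.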
The literal definition of a model is "an informative representation of an object, person, or system". I think you mean something else, though; what exactly are you trying to express?
Nothing about an LLM is “just”. In what precise sense do you mean it is probabilistic?
It isn't possible for something to be "just probabilistic" (maybe a philosophical exception could be made for a uniform random distribution, or whatever supplies the small dose of randomness needed for nondeterministic results). Probabilities are always defined in the context of a model. LLMs model language, but language is itself a model of something else. My money would have been on language modelling producing nonsense, but that is quite clearly not the case. It turns out language models the world, and so do LLMs.