One of the core problems we have in software engineering is the longstanding philosophical one: building cohesive, consistent, objective mental models of inherently subjective concepts like identifying a person, a place, etc. Look at the endless lists of falsehoods programmers (tend to) believe about any given topic.
You’re right that LLMs specifically offer no guarantees about the accuracy or veracity of the text they generate, but I posit the same is true of people, especially once their output is filtered through the socialization process. The difference lies in the kinds of errors machines make compared to the ones humans make.
It’s frustrating that we use anthropomorphic concepts like “hallucination” to describe LLM behavior, given that the fundamental units of computation, and thus the failures of computation, are so different at every level.