I've never seen anyone here claim that AI never hallucinates or can't provide incorrect information.
I've absolutely seen commenters who claim that hallucinations are a thing of the past if you use the newest models. They're wrong, but they exist.