Isn't that what no LLM can provide: being free of hallucinations?
For the record, brains are also not free of hallucinations.
Yes, they probably won't go away, but it has to be possible to handle them better.
Gemini (the app) has a "mitigation" feature where it tries to do Google searches to support its statements. That doesn't currently work properly in my experience.
It also seems to do something where it adds references to statements (With a separate model? With a second pass over the output? Not sure how that works.). That works well where it does add them, but it often simply doesn't.
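Pure speculation on my part, but a "second pass" grounding step could look roughly like the sketch below. Everything here is made up for illustration: search_web and supports are stubs standing in for a real search API and a real entailment check, not anything Gemini actually exposes.

```python
# Hypothetical sketch of a second-pass grounding step: split a draft answer
# into sentences, look each one up, and attach a reference only when a
# supporting source is found. All names here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Source:
    url: str
    snippet: str


def search_web(query: str) -> list[Source]:
    # Stub: a real system would call an actual search API here.
    fake_index = {
        "creatine": Source("https://example.org/creatine-meta-analysis",
                           "Meta-analysis of creatine supplementation trials."),
    }
    return [src for key, src in fake_index.items() if key in query.lower()]


def supports(claim: str, source: Source) -> bool:
    # Stub entailment check: a real system would use an NLI model or a
    # second LLM call to judge whether the snippet actually backs the claim.
    return any(word in source.snippet.lower() for word in claim.lower().split())


def add_references(draft: str) -> str:
    cited_lines = []
    for sentence in filter(None, (s.strip() for s in draft.split("."))):
        hits = [src for src in search_web(sentence) if supports(sentence, src)]
        if hits:
            cited_lines.append(f"{sentence}. [{hits[0].url}]")
        else:
            # No supporting source found: keep the sentence but flag it,
            # which is exactly the case the parent comment complains about.
            cited_lines.append(f"{sentence}. [unsupported]")
    return "\n".join(cited_lines)


if __name__ == "__main__":
    print(add_references(
        "Creatine improves short-burst performance. Lifting heavy cures everything."))
```

The interesting design question is the "unsupported" branch: whether the model should drop the claim, flag it, or silently keep it, and silently keeping it is what seems to be happening today.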
Find me a human that doesn't occasionally talk out of their ass =[
Part of it is also reproducing incorrect information that's in the training data.
One area that I've found to be a great example of this is sports science.
Depending on how you ask, you can get a response lifted from the scientific literature or the bro-science one, even within the same conversation.
It makes sense: both have answers to similar questions, and both are very commonly repeated online.
I think the better word is confabulation: fabricating plausible but false narratives based on faulty memory. Fundamentally, these models try to produce plausible text. As language models get large, they start forming internal world models, and some research suggests they actually have truth dimensions [0]; a toy sketch of the probing idea is below.
I'm not an expert on the topic, but to me it sounds plausible that a good part of the problem of confabulation comes down to misaligned incentives. These models are trained hard to be a 'helpful assistant', and this might conflict with telling the truth.
Being free of hallucinations is a bit too high a bar to set anyway. Humans are extremely prone to confabulation as well, as the unreliability of eyewitness reports shows. We usually get by through efficient tool calling (looking shit up), and some of us through expressing doubt about our own capabilities (critical thinking).
[0] https://arxiv.org/abs/2407.12831
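For anyone curious what "truth dimensions" means concretely: work in this area typically fits a linear probe on hidden activations of true vs. false statements. The sketch below is a toy version with synthetic Gaussian "activations" instead of real model internals, and it is not the exact method of [0]; it just illustrates the mass-mean probing idea.

```python
# Toy illustration of a "truth direction": plant a direction in synthetic
# activations, recover it with a mass-mean probe, and classify by projection.
# Real work extracts activations from an actual LLM; this data is a stand-in.

import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # pretend hidden-state dimensionality
truth_dir = rng.normal(size=d)
truth_dir /= np.linalg.norm(truth_dir)   # planted ground-truth direction


def fake_activations(n: int, truthful: bool) -> np.ndarray:
    # Activations = Gaussian noise plus a shift along the planted direction.
    sign = 1.0 if truthful else -1.0
    return rng.normal(size=(n, d)) + sign * 2.0 * truth_dir


X_true, X_false = fake_activations(500, True), fake_activations(500, False)

# Mass-mean probe: the difference of class means recovers the direction.
probe = X_true.mean(axis=0) - X_false.mean(axis=0)
probe /= np.linalg.norm(probe)
print("cosine(probe, planted direction):", float(probe @ truth_dir))

# Classify held-out samples by the sign of their projection onto the probe.
X_test = np.vstack([fake_activations(100, True), fake_activations(100, False)])
y_test = np.array([1] * 100 + [0] * 100)
pred = (X_test @ probe > 0).astype(int)
print("probe accuracy:", float((pred == y_test).mean()))
```

The point of the toy is just that a single linear direction can separate "true" from "false" representations; whether that translates into fewer confabulations at generation time is a separate question.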