Why, though? They seem to know everything about everything, so why not this specifically?
The problem with this line of reasoning is that it is unscientific. "They seem to" is not good enough for an operational understanding of how LLMs work. The whole point of training is to discard details in order to form general capability, so it is not surprising that they forget things about books when the system deemed other properties more important to remember.
>> it is not surprising that they forget things about books when the system deemed other properties more important to remember.
I will repeat for the third time: my problem is not that the system forgets the details; quite the opposite.
>> The problem with this line of reasoning is that it is unscientific.
How do you scientifically figure out whether an LLM knows something before actually asking it the question, in the case of a publicly accessible model like Gemini? As far as I can tell, the only test available is behavioral, i.e. asking and checking the answers (see the sketch below).
Just to be clear - I would be about 1000000x less upset if it just said "I don't know" or "I can't do that". These models are fundamentally incapable of recognizing their own limits; that alone, though, is forgivable. Them literally ignoring instructions is not.
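A minimal sketch of what I mean by a behavioral probe, assuming the public Gemini generateContent REST endpoint; the model name, the GEMINI_API_KEY environment variable, and the response schema are assumptions that may not match the current API. The idea: ask the same factual question in several paraphrases and check whether the answers agree. Agreement is weak evidence of stored knowledge, divergence suggests confabulation, and either way the asking itself is the experiment.

    import os
    import requests

    API_KEY = os.environ["GEMINI_API_KEY"]  # assumed env var holding your key
    URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-1.5-flash:generateContent")  # assumed model name

    def ask(question):
        """Send one question and return the model's reply text."""
        body = {"contents": [{"parts": [{"text": question}]}]}
        resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=30)
        resp.raise_for_status()
        # Response schema as documented at the time of writing; may change.
        return resp.json()["candidates"][0]["content"]["parts"][0]["text"].strip()

    # Paraphrases of one factual question about a book (illustrative only).
    paraphrases = [
        "Who narrates Moby-Dick?",
        "In Melville's Moby-Dick, which character tells the story?",
        "Name the first-person narrator of the novel Moby-Dick.",
    ]

    answers = [ask(q) for q in paraphrases]
    for q, a in zip(paraphrases, answers):
        print("Q:", q, "\nA:", a, "\n")

    # Crude heuristic: identical replies across paraphrases hint at real recall.
    # Free-form text rarely matches exactly, so a real probe would compare the
    # extracted entity ("Ishmael") rather than the raw strings.
    print("consistent" if len(set(a.lower() for a in answers)) == 1 else "inconsistent")

Which is the point: with a black-box model there is nothing to inspect before the query, so "scientifically figuring out what it knows" collapses into running queries and measuring the behavior.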