Hacker News

gambiting · yesterday at 8:50 PM · 0 replies

>> if they forget things about books if the system deemed other properties as more important to remember.

I will repeat for the third time: the problem is not that the system forgets the details - quite the opposite.

>>The problem with this line of reasoning is that it is unscientific.

How do you scientifically determine whether the LLM knows something before actually asking the question, in the case of a publicly accessible model like Gemini?

Just to be clear - I would be about 1000000x less upset if it just said "I don't know" or "I can't do that". These models are fundamentally incapable of recognizing their own limits, but that alone is forgivable - them literally ignoring instructions is not.