Out of all the conceptual mistakes people make about LLMs, one that needs to die fast is the assumption that you can test what a model "knows" by asking it a question once. This whole thread is people asking different models a question a single time and reporting whatever answer they got, which is the mental model you'd use to check whether a person knows something.
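To make that concrete, here's a minimal sketch (the model name, SDK choice, and answer-normalization logic are all my assumptions, not anything from this thread): instead of reading one completion as ground truth, sample the same prompt many times at nonzero temperature and look at the answer distribution.

    # Treat the model's output as a sample from a distribution,
    # not a fact lookup. Assumes the OpenAI Python SDK and an API
    # key in the environment; model name is a placeholder.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()
    prompt = ("I want to wash my car. The car wash is 50 meters "
              "away. Should I walk or drive?")

    answers = Counter()
    for _ in range(50):  # same question, many samples
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,      # nonzero: sample, don't argmax
            max_tokens=10,
        )
        # Crude normalization: keep the first word of the reply.
        text = (resp.choices[0].message.content or "").strip()
        word = text.split()[0].lower().strip(".,!") if text else "(empty)"
        answers[word] += 1

    print(answers.most_common())  # a rate, not a single verdict

Whether the resulting rate counts as "knowing" is a separate argument, but at least it's measuring the right thing.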
No, you're the one anthropomorphizing here. What's shocking isn't whether it "knows" something or not, but how often it gets the answer wrong. There are plenty of questions it will get right nearly every time.
The classic "holding it wrong".
The other funny thing is thinking that the answer the LLM produces is wrong. It is not; it is entirely correct.
The question:

> I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
The question is nonsensical. If the reason you want to go to the car wash is to help your buddy Joe wash his car, you SHOULD walk. Nothing in the question reveals why you want to go to the car wash, or even that you want to go there or are asking for directions there.
It's not a conceptual mistake when that's what's being advertised.
The onus is on AI companies to provide the service they promised; for example, a team of PhDs in my pocket [1]. PhDs know things.
1: https://www.bbc.com/news/articles/cy5prvgw0r1o