Because a human would never ask such a question, it isn't in the training set, and that exposes how bad an LLM can be at reasoning from first principles. Which, I think, is the point of such silly questions.