It makes no sense to walk, so the whole question makes no sense; there's no real choice. It seems the LLM assumes "good faith" on the user's side and tries to model a situation in which the question actually does make sense, then produces an answer for that situation.
I think that's a valid problem with LLMs. They should recognize nonsense questions and answer "wut?".
That's one of the biggest shortcomings of AI: it can't suss out when the entire premise of a prompt is inherently problematic or unusual. Guardrails are a band-aid fix, as evidenced by the proliferation of jailbreaks. I think this is just fundamental to the technology. Grandma would never tell you her dying wish was that you learned how to assemble a bomb.