All of the latest models I've tried actually pass this test. What I found interesting was that all of the success cases were similar to:
e.g. "Drive. Most car washes require the car to be present to wash,..."
Only most?!
They seem unable to hold a strong "opinion", probably because their post-training, and maybe the internet in general, prefers hedged answers...
> They seem unable to hold a strong "opinion", probably
What opinion? Its evaluation function simply returned the word "Most" as the most likely first word in similar sentences it was trained on. It's a perfect example of how dangerous this tech could be in a scenario where the prompter is less competent in the domain they're looking for an answer in. Let's not do the work of filling in the gaps for the snake oil salesmen of the "AI" industry by trying to explain away its inherent weaknesses.
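To make the "no opinion" point concrete, here's a toy sketch of a single next-token step. The tokens and scores below are invented, not from any real model; the point is just that greedy decoding is an argmax over a probability distribution, nothing more:

```python
import math

# Illustrative only: a toy next-token step. Real models score ~100k tokens;
# these "logits" are made up for the example.
logits = {"Most": 3.1, "All": 2.2, "Drive": 1.9, "Some": 1.4}

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding is just a max() over scores -- no "opinion" anywhere.
print(max(probs, key=probs.get))  # -> Most
```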
Did you try several times per model? In my experience it's luck of the draw. All the models I tried managed to get it wrong at least once.
The models that had access to search got it right. But then we're just dealing with an indirect version of Google.
(And they got it right for the wrong reasons... i.e., this is a known question designed to confuse LLMs.)
They pass it because it went viral a week ago and has been patched
I guess it didn’t want to rule out the existence of ultra-powerful water jets that can wash a car in sniper mode.
I enjoyed the Deepseek response that said “If you walk there, you'll have to walk back anyway to drive the car to the wash.”
There’s a level of earnestness here that tickles my brain.
> Only most?!
There is such a thing as "mobile car wash" where they come to you, so "most" does seem appropriate.
I tried with Opus 4.6 Extended and it failed. LLMs are non-deterministic, so I'm guessing that if I try a couple of times it might succeed.
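For what it's worth, that non-determinism typically comes from temperature sampling at decode time. A minimal sketch, with made-up token probabilities (not any real model's), of why retries can flip the answer:

```python
import random

# Invented distribution over the first answer token -- illustration only.
tokens = ["Drive", "Walk"]
weights = [0.8, 0.2]

# With temperature > 0 the decoder samples instead of taking the argmax,
# so the same prompt can come back "Walk" roughly one try in five.
for attempt in range(5):
    print(attempt, random.choices(tokens, weights=weights, k=1)[0])
```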
Opus 4.6 answered with "Drive." Opus 4.6 in incognito mode (or whatever they call it) answered with "Walk."
Kind of like this: https://xkcd.com/1368/
And it is the kind of thing a (cautious) human would say.
For example, that could be my reasoning: It sounds like a stupid question, but the guy looked serious, so maybe there are some types of car washes that don't require you to bring your car. Maybe you hand over the keys and they pick up your car, wash it, and put it back in its parking spot while you are doing your groceries or something. I am going to say "most" just to be sure.
Of course, if I expected trick questions, I would have reacted accordingly, but LLMs are most likely trained to take everything at face value, as it is more useful this way. Usually, when people ask LLMs questions they want a factual answer, not for the LLM to be witty. Furthermore, LLMs are known to hallucinate very convincingly, and hedged answers may be a way to counteract this.
> Most car washes...
I read it as a slightly sarcastic answer.
There are car wash services that will come to where your car is and wash it. It’s not wrong!
> Only most?!
What if AI developed sarcasm without us knowing… xD
There are mobile car washes that come to your house.
> Only most?!
I mean, I can imagine a scenario where they have a 50m pipe, which is readily available commercially?
Once I asked ChatGPT "it takes 9 months for a woman to make one baby. How long does it take 9 women to make one baby?". The response was "it takes 1 month".
I guess it gives the correct answer now. I also guess that these silly mistakes are patched and that these patches compensate for the lack of a comprehensive world model.
These "trap" questions don't prove that the model is silly. They only prove that the user is a smartass. I asked the question about pregnancy only to show a friend that his opinion that LLMs have PhD-level intelligence is naive and anthropomorphic. LLMs are great tools regardless of their ability to understand physical reality. I don't expect my wrenches to solve puzzles or show emotions.
Here’s my take: boldness requires the risk of being wrong sometimes. If we decide being wrong is very bad (which I think we generally have agreed is the case for AIs) then we are discouraging strong opinions. We can’t have it both ways.