Hacker News

simianwords · yesterday at 8:01 PM · 2 replies

How would you empirically demonstrate that it doesn't have understanding?

I can prove that it does have understanding because it behaves exactly like a human with understanding does. If I ask it to solve an integral and then ask it questions about it, it replies exactly as if it has understood.

Give me a specific example so that we can stress-test this argument.

For example: what if we came up with a new board game with a completely new set of rules and saw whether it can reason about it and beat humans (or come close)?


Replies

bigstrat2003 · today at 12:05 AM

> How would you empirically demonstrate that it doesn't have understanding?

The complete failure of Claude to play Pokémon, something a small child can do with zero prior instruction. The "how many r's are in strawberry" question. The "should I drive or walk to the car wash" question. The fact that right now, today, all models very frequently turn out code that uses APIs that don't exist, syntax that doesn't exist, or contains basic logic failures.
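For reference, the "strawberry" test has a trivially checkable ground truth; a one-liner any programmer (or a child counting letters) gets right, which is exactly why the models' failures on it are striking:

```python
# Ground truth for the "how many r's are in strawberry" question:
# a plain substring count, no language understanding required.
count = "strawberry".count("r")
print(count)  # 3
```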

The cold hard reality is that LLMs have been constantly showing us they don't understand a thing since... forever. Anyone who thinks they do have understanding hasn't been paying attention.

> I can prove that it does have understanding because it behaves exactly like a human with understanding does.

First, no, it doesn't. See my previous examples, which wouldn't have posed a challenge for any human with a pulse (or a pulse and basic programming knowledge, in the case of the programming examples). But even if it were true, it would prove nothing. There's a reason math teachers make kids show their work: it's actually fairly common to arrive at a correct result by incorrect means.

bigfishrunning · yesterday at 8:24 PM

We don't need to come up with a new board game. How about a board game that has been written about extensively for hundreds of years?

LLMs can't consistently win at chess https://www.nicowesterdale.com/blog/why-llms-cant-play-chess

Now, some of the best chess engines in the world are Neural Networks, but general purpose LLMs are consistently bad at chess.

As far as "LLMs don't have understanding" goes, that is axiomatically true by the nature of how they're implemented. A bunch of matrix multiplies resulting in a high-dimensional array of token scores does not think; this has been written about extensively. They are really good at generating language that looks plausible; some of that plausible-looking language happens to be true.
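To make the "bunch of matrix multiplies" point concrete, here is a deliberately tiny sketch (not a real LLM; the vocabulary and weight values are made up) of next-token prediction reduced to its mechanical core: one-hot encode the input token, multiply by a weight matrix, take the argmax. Everything a real model adds on top is more of the same arithmetic at vastly larger scale.

```python
# Toy "language model": next-token prediction as a single matrix multiply.
# Vocabulary and weights are invented for illustration only.
VOCAB = ["the", "cat", "sat", "on", "mat"]

# Hypothetical "learned" weights: W[i][j] = score for token j following token i.
W = [
    [0.1, 0.9, 0.0, 0.0, 0.2],  # after "the"
    [0.0, 0.0, 0.8, 0.1, 0.0],  # after "cat"
    [0.1, 0.0, 0.0, 0.9, 0.0],  # after "sat"
    [0.9, 0.0, 0.0, 0.0, 0.3],  # after "on"
    [0.5, 0.1, 0.0, 0.0, 0.0],  # after "mat"
]

def next_token(token: str) -> str:
    """One-hot encode the token, multiply by W, return the argmax token."""
    one_hot = [1.0 if t == token else 0.0 for t in VOCAB]
    scores = [sum(one_hot[i] * W[i][j] for i in range(len(VOCAB)))
              for j in range(len(VOCAB))]
    return VOCAB[scores.index(max(scores))]

print(next_token("the"))  # "cat" -- the highest-scoring row entry, no comprehension involved
```

The point of the sketch: the output is fully determined by stored numbers and arithmetic over them, which is the sense in which "looks plausible" and "is understood" come apart.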
