Hacker News

fgfarben · yesterday at 7:46 AM · 1 reply

I love reading posts like this. When you were a child, learning math or grammar, do you not remember bouncing off the walls of incorrect answers, eventually landing on a trajectory down the corridor of the right answer? Or were you always instantly zero-shotting everything?

In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.

Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper-specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.
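
For concreteness, here's a minimal sketch of the kind of balanced-parentheses check being alluded to, assuming the puzzle means verifying that brackets in a string close in the right order (my own illustration, not part of the original argument):

    def is_balanced(s: str) -> bool:
        """Return True if every bracket in s is closed in the right order."""
        pairs = {")": "(", "]": "[", "}": "{"}
        stack = []
        for ch in s:
            if ch in "([{":
                stack.append(ch)
            elif ch in pairs:
                if not stack or stack.pop() != pairs[ch]:
                    return False
        return not stack

    # A human glances at "(()" and sees the asymmetry at once;
    # the loop above is the mechanical equivalent of that glance.
    print(is_balanced("(()"))   # False
    print(is_balanced("()[]"))  # True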


Replies

calf · yesterday at 9:16 AM

The mistake in these types of arguments is assuming that because natural, classical-artificial, and neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, they must work the same way; their underlying mechanisms could well be fundamentally different. Until computer science advances enough to explain what the differences and similarities actually are, such arguments are invalid.