MarkusQ · yesterday at 3:46 PM

So substitute another phrase, if you prefer. It doesn't change the logic.

"Specifically, we define a formal world where bungling is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably bungle if used as general problem solvers."


Replies

red75prime · yesterday at 4:37 PM

Their diagonalization argument applies to any system that uses finite training data. Calling such a system an "LLM" is an (unintentional) red herring.
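A toy version of that diagonalization in Python (the names `models` and `ground_truth` are mine, and a finite list stands in for a computable enumeration of learners):

    # For ANY fixed enumeration of candidate predictors, construct
    # a ground truth that every candidate gets wrong somewhere:
    # flip the i-th model's own answer on input i.
    def ground_truth(i, models):
        return 1 - models[i](i)

    # Three stand-in "trained" predictors with outputs in {0, 1}.
    models = [lambda x: 0, lambda x: x % 2, lambda x: 1]
    for i, m in enumerate(models):
        assert m(i) != ground_truth(i, models)  # model i errs at input i

Nothing in the flip depends on the predictors being LLMs; any countable family of learners is defeated the same way, which is the sense in which "LLM" does no work in the proof.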
