Hacker News

breuleux · yesterday at 9:03 PM · 1 reply

The point is that "predicting the next token" is such a general mechanism as to be meaningless. We say that LLMs are "just" predicting the next token, as if this somehow explained all there was to them. It doesn't, not any more than "the brain is made out of atoms" explains the brain, or "it's a list of lists" explains a Lisp program. It's a platitude.


Replies

esafak · today at 1:10 AM

It's not meaningless: it's a prediction task, and prediction is commonly held to be closely related to, if not synonymous with, intelligence.