Hacker News

varispeed · yesterday at 11:50 AM · 5 replies

There is nothing smart about current LLMs. They just regurgitate text compressed in their memory, picking each word by probability. None of the current LLMs have any actual understanding of what you ask them to do or of what they respond with.
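Concretely, the mechanism being described is autoregressive next-token sampling. A minimal sketch of that loop, assuming the Hugging Face transformers API (the model name "gpt2", the prompt, and the sampling settings are all illustrative):

  # Sketch of next-token sampling: score candidates, turn scores into
  # probabilities, sample one token, append it, repeat.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
  with torch.no_grad():
      for _ in range(10):
          logits = model(ids).logits[:, -1, :]               # scores for the next token
          probs = torch.softmax(logits, dim=-1)              # scores -> probability distribution
          next_id = torch.multinomial(probs, num_samples=1)  # sample one token from it
          ids = torch.cat([ids, next_id], dim=-1)            # append and repeat

  print(tokenizer.decode(ids[0]))

Whether that loop amounts to "understanding" is exactly what the replies below argue about.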


Replies

adamtaylor_13 · yesterday at 1:53 PM

If LLMs just regurgitated compressed text, they'd fail on any novel problem absent from their training data. Yet they routinely solve such problems, which means whatever happens between input and output is more than retrieval. And calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains.

bsenftner · yesterday at 12:43 PM

We know that, but it does not make them useless. The opposite, in fact: they are extremely useful in the hands of non-idiots. We just happen to have an oversupply of idiots at the moment, which AI is here to eradicate. /Sort of satire.

visarga · yesterday at 1:47 PM

So you are saying they are like cp, that LLMs just copy some training data back to you? Then why do we spend so much money training and running them if they "just regurgitate text compressed in their memory based on probability"? Billions of dollars to build a lossy grep.

I think you are confused about LLMs: they take in context, and that context makes them generate new things; for existing things we already have cp. By your logic a piano can't be a creative instrument because it only produces the same 88 notes.
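For scale, a quick back-of-the-envelope sketch of that analogy (the sequence lengths are chosen arbitrarily):

  # A fixed symbol set still yields an astronomically large space of
  # distinct sequences, so "same 88 notes" does not imply "same output".
  keys = 88  # piano keys; for an LLM, read "tokens in the vocabulary"
  for length in (8, 16, 32):
      print(f"{keys}^{length} = {float(keys**length):.3e} possible sequences")

Even at length 32 that is on the order of 10^62 sequences, which is why generating from a fixed vocabulary is not the same thing as copying.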

small_model · yesterday at 12:04 PM

That's not how they work. Pro tip: maybe don't comment until you have a good understanding?

beyondCritics · yesterday at 12:40 PM

Just HI slop. Ask any decent model; it can explain what's wrong with this description.