Hacker News

pfisch · yesterday at 6:44 PM · 3 replies

Even very young children with very simple thought processes, almost no language capability, little long-term planning, and minimal ability to form long-term memory actively deceive people. They will attack other children who take their toys and then try to avoid blame through deception. It happens constantly.

LLMs are certainly capable of this.


Replies

mikepurvis · yesterday at 6:53 PM

Dogs too; dogs will happily pretend they haven't been fed/walked yet to try to get a double dip.

Whether or not LLMs are just "pattern matching" under the hood, they're perfectly capable of role play, and of sufficient empathy to imagine what their conversation partner is thinking, and thus what needs to be said to stimulate a particular course of action.

Maybe human brains are just pattern matching too.

sejje · yesterday at 6:51 PM

I agree that LLMs are capable of this, but it doesn't follow that because young children can do X, LLMs can "certainly" do X.

anonymous908213 · yesterday at 6:52 PM

Are you trying to suppose that an LLM is more intelligent than a small child with simple thought processes, almost no language capability, little long-term planning, and minimal ability to form long-term memory? Even with all of those qualifiers, you'd still be wrong. The LLM is predicting what tokens come next, based on a bunch of math operations performed over a huge dataset. That, and only that. That may have more utility than a small child with [qualifiers], but it is not intelligence. There is no intent to deceive.
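To make the "predicting what tokens come next" claim concrete, here is a minimal toy sketch of greedy next-token decoding. All names and the toy scores are hypothetical; a real LLM produces logits from billions of parameters, but the final selection step looks roughly like this:

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def next_token(logits):
    # Greedy decoding: pick the single most probable next token.
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Toy "model output": scores for a handful of candidate tokens.
logits = {"dog": 2.0, "cat": 1.0, "toy": 0.1}
print(next_token(logits))  # → dog
```

In practice decoders usually sample from the distribution (with temperature, top-k, etc.) rather than always taking the argmax, but the point stands: the model emits one token at a time from a learned distribution, with no separate "intent" mechanism.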
