Hacker News

atleastoptimal (yesterday at 10:59 PM)

What is fundamental to LLMs that makes it impossible for them to infer intent?

All the limitations you are describing for LLMs apply to humans as well. If a human trips up on an ambiguously worded question, does that mean they are always just faking their thinking?


Replies

Avicebron (today at 12:04 AM)

“We see emotion. … We do not see facial contortions and make inferences from them … to joy, grief, boredom. We describe a face immediately as sad, radiant, bored, even when we are unable to give any other description of the features.” (Wittgenstein)