
ibestvina · yesterday at 10:22 AM

There's a whole industry of "illusions" humans fall for: optical illusions, wordplay (including large parts of comedy), the Penn & Teller type, etc. Yet no one claims these are indicators that humans lack some critical capability.

The surface of "illusions" for LLMs is very different from our own, and it's very jagged: change a few words in the above prompt and you get very different results. Note that human illusions are very jagged too, especially in the optical and auditory domains.

There's no good reason to think "our human illusions" are fine but "their AI illusions" make them useless. It's all about how we organize workflows around these limitations.


Replies

raincole · yesterday at 10:26 AM

> No good reason to think "our human illusions" are fine, but "their AI illusions" make them useless.

I was about to argue that human illusions are fine because humans learn from their mistakes after being corrected.

But then I remembered what online discussions of the Monty Hall problem look like...
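
For what it's worth, the correct answer is easy to verify empirically. A minimal simulation sketch (my own, not from the thread) shows switching wins about 2/3 of the time:

    import random

    def play(switch):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Host opens a door that hides a goat and isn't the player's pick.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d not in (pick, opened))
        return pick == car

    n = 100_000
    print("stay:  ", sum(play(False) for _ in range(n)) / n)  # ~0.333
    print("switch:", sum(play(True) for _ in range(n)) / n)   # ~0.667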
