Hacker News

contravariant · yesterday at 3:58 PM

> All the people responding saying "You would never ask a human a question like this"

That's also something people seem to miss about the Turing Test thought experiment. Sure, merely deceiving someone is possible; even the simplest chatbot can achieve that. The really interesting implications start when there's genuinely no way to tell a chatbot apart from a human.


Replies

TheJoeMan · yesterday at 8:52 PM

But it isn't just a brain-teaser. If the LLM is supposed to control, say, Google Maps, then Maps is the one asking "walk or drive?" through the API. So when I voice-ask the assistant to take me to the car wash, it should realize it shouldn't show me walking directions.
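A minimal sketch of the idea: the assistant answers the API's "walk or drive?" question itself, from cues in the user's request, instead of bouncing it back to the user. All names here (`infer_travel_mode`, `DirectionsRequest`, the cue list) are hypothetical illustrations, not the real Google Maps API.

```python
# Hypothetical sketch of an assistant layer filling in the travel-mode
# parameter before calling a directions API. None of these names come
# from the real Google Maps API; they are illustrative only.
from dataclasses import dataclass


@dataclass
class DirectionsRequest:
    destination: str
    mode: str  # "driving" or "walking"


def infer_travel_mode(utterance: str) -> str:
    """Pick a default mode from cues in the spoken request itself."""
    # A car wash only makes sense if you arrive by car.
    car_cues = ("car wash", "drive", "gas station", "parking")
    if any(cue in utterance.lower() for cue in car_cues):
        return "driving"
    return "walking"


def build_request(utterance: str, destination: str) -> DirectionsRequest:
    # The assistant resolves the mode, so the user is never asked.
    return DirectionsRequest(destination, infer_travel_mode(utterance))
```

So `build_request("take me to the car wash", "Acme Car Wash")` would default to driving directions, while a request mentioning a walk would not. A real assistant would use the model itself rather than a keyword list, but the division of labor is the point: the API's question is answered by context, not by the user.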