Hacker News

SAI_Peregrinus · today at 4:04 PM

> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”

Interestingly enough, it sort of did! Not Turing's original test, where an interviewer attempts to determine which of a human and a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T. Barnum Turing test!

The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject and the AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer who can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. An AI can never definitively pass this test, since the pool of possible interviewers is effectively unbounded; it can only fail, or keep succeeding against every interviewer tried so far, increasing confidence that it will keep succeeding. Current-gen LLMs still fail even the non-adversarial version, with no human subject to compare against.
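The pass/fail criterion described above can be sketched numerically: treat each interviewer's n interviews as Bernoulli trials and run an exact two-sided binomial test against the chance rate of 0.5. This is a minimal illustration, not anything from the comment; the function name and the interview counts are made up for the example.

```python
import math

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under chance-level
    guessing, of an outcome at least as unlikely as k correct out of n."""
    def pmf(i):
        return math.comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    # Sum the probability of every outcome no more likely than the observed one.
    return min(1.0, sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12))

# Hypothetical tallies from 200 interviews with a single interviewer:
print(binom_two_sided_p(100, 200))  # exactly chance-level: p ~ 1, can't reject 50/50
print(binom_two_sided_p(130, 200))  # 65% correct: p well below 0.05, AI fails vs. this interviewer
```

The "never truly pass" point falls out of the framing: a small p-value for any single interviewer is enough to fail the AI, while chance-level results from every interviewer tried so far only bound, and never eliminate, the possibility that some untried interviewer could tell the difference.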


Replies

jmalicki · today at 5:58 PM

I see AI pass the Turing test all the time, since humans are constantly being falsely accused of being an AI.

It doesn't mean the AI got good, just that humans now mistake other humans for AI, which is a form of passing the test.

The adversarial version with a human subject is actually easier to pass because of this: plenty of real, actual humans wouldn't pass your non-adversarial version.

why_at · today at 7:11 PM

Whenever the Turing test comes up, people insist it's been passed because at some point some program fooled at least 50% of the people who tried it. But this isn't a very interesting version of the test: ELIZA was able to make some people believe it was human in the 1960s, and being able to fool some of the people some of the time isn't very hard.

>The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human.

In addition, I think it's reasonable to select interviewers with at least some familiarity with the strengths and weaknesses of the AI, instead of random credulous people who aren't very good at asking the right questions.

There is also the $20,000 bet between Kurzweil and Kapor, which still hasn't been resolved: https://longbets.org/1/

hodgesrm · today at 6:55 PM

Does anyone else find it a bit disorienting that we're essentially implementing the Blade Runner Voight Kampff test?

https://bladerunner.fandom.com/wiki/Voight-Kampff_test

goldenarm · today at 6:28 PM

The Turing test was passed in 2014, before LLMs, and I've never seen a researcher take it seriously.
