Hacker News

VorpalWay · yesterday at 10:34 PM

It is not just AGI that is poorly defined. Plain AI is a moving goalpost too. When the A* search algorithm was introduced in the late 60s, it was considered AI; when SVMs (support vector machines) and KNN (k-nearest neighbors) were new, they were AI. And so on.

These days it is neural networks and transformer models for language in particular that people mean when they say unqualified AI.

It is very hard to have a meaningful discussion when different parties mean different things with the same words.


Replies

Wowfunhappy · yesterday at 10:38 PM

I really wish I could wave a magic wand and make everyone stop using the term "AI". It means everything and nothing. Say "machine learning" if that's what you mean.

dataflow · today at 12:02 AM

I think the Turing test ought to be fine, but we need to be less generous to the AI when executing it. If there exists any human who can consistently tell your AI apart from humans without insider knowledge, then I don't think you can claim to have AGI, even if 99.9% of humans can't tell it apart.

So I'm very curious whether any AI we have today would pass the Turing test under all circumstances, for example if: the examiner were allowed to continue as long as they wanted (even days or weeks); the examiner could be anybody (not just a random selection of humans); observations other than the text itself were fair game (say, typing/response speed, exhaustion, time of day, or the examiner taking a break and asking to continue later); both subjects were allowed and expected to search the internet; etc.
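dataflow's "any examiner" criterion is stricter than the usual majority-fooled framing, and the difference is easy to formalize. This is a hypothetical sketch (the function name, data shape, and threshold are illustrative, not from any real benchmark): the AI fails if even one examiner reliably spots it, regardless of how many others are fooled.

```python
# Hypothetical sketch of the stricter pass criterion described above.
# `trials` maps each examiner to their session judgments
# (True = the examiner correctly identified the AI in that session).

def passes_strict_turing_test(trials: dict[str, list[bool]],
                              threshold: float = 0.95) -> bool:
    """Fail if ANY examiner consistently tells the AI apart,
    even if every other examiner is fooled every time."""
    for examiner, judgments in trials.items():
        accuracy = sum(judgments) / len(judgments)
        if accuracy >= threshold:  # this examiner reliably spots the AI
            return False
    return True

# One sharp examiner is enough to fail the whole test:
results = {
    "examiner_a": [False] * 100,          # fooled every time
    "examiner_b": [True] * 99 + [False],  # almost always right
}
print(passes_strict_turing_test(results))  # False
```

Under the usual framing, 50% of examiners being fooled might count as a pass; here the existential quantifier ("there exists any human") flips the burden entirely.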

marcus_holmes · today at 4:01 AM

Agree. I say "LLMs" when discussing them, and avoid the term "AI" unless I'm talking about the industry as a whole. I find it really helps to be specific in this case.