>Anytime I see "Artificial General Intelligence," "AGI," "ASI," etc., I mentally replace it with "something no one has defined meaningfully."
There are lots of meaningful definitions; the people saying we haven't reached AGI just don't use them. For most of the last half-century, people would have agreed that machines that can pass the Turing test and win Math Olympiad gold count as AGI.
The most pragmatic definition I know is OpenAI's own: "highly autonomous systems that outperform humans at most economically valuable work". Which is still something between skippetyboop and zingybang, as it leaves a ton of room for OAI to decide whether that moment has been reached, and "economically valuable work" is a moving target anyway.
If fooling people and doing math good are the criteria, we've had AGI for longer than we've had the modern internet.
The Turing test is generally misunderstood; much like Schrödinger's cat, it has devolved into a pop-cultural meme. The test is meant to evaluate whether a machine can think, not whether it is intelligent or human-like. It's dismissed as a useful measure by most experts in philosophy of mind, AI, language, etc.
Thinking is cool and all, but not that extraordinary. Even plants do it.
Firstly, the models that pass the Math Olympiad aren’t the same models as the ones you’re saying “pass the Turing test”. Secondly, nothing actually passes the Turing test. They pass a vibes check of “hey that’s pretty good!” but if your life depended on it, you could easily find ways to sniff out an LLM agent. Thirdly, none of these models learn in real time, which is an obviously essential feature.
We’ll know AGI when we see it, and this ain’t it. All this complaining about moving goalposts is transparently sour grapes from people over-invested in hyping the current LLM paradigm.