AGI is so nebulous we will never be able to tell if we hit it. We have hit human-level abilities in some narrow tasks, and are still leagues away in others. And humans have such vastly different skill levels that we can't even agree on what human-level really means. As bad as the economic definition of AGI in OpenAI's Microsoft deal is, at least it's measurable.
Imho that's a big part of why people are shifting to ASI. Not because we reached AGI, but because 'we reached ASI' is a well-defined, verifiable statement, whereas 'we reached AGI' just isn't.
> 'we reached ASI' is a well-defined verifiable statement, where 'we reached AGI' just isn't
So... we can't tell when the rocket has left Earth's atmosphere, but we can tell when the rocket has entered space?
I'm not getting how "superior in all tasks" is better-defined for you than "equal in all tasks".
> AGI is so nebulous we will never be able to tell if we hit it.
I completely agree. We can't even measure each other well, let alone machines.
I feel like AI is actually helping us get a better understanding of what "human intelligence" really is.
I remember when computers became better than humans at chess, many people were shocked and saw that as machines becoming more intelligent than humans, because being good at chess was considered equivalent to "being smart".