AGI is so nebulous we will never be able to tell if we hit it. We have reached human-level ability in some narrow tasks and are still leagues away in others. And humans have such vastly different skill levels that we can't even agree on what "human-level" really means. As bad as the economic definition of AGI in OpenAI's Microsoft deal is, at least it's measurable.
Imho that's a big part of why people are shifting to ASI. Not because we reached AGI, but because "we reached ASI" is a well-defined, verifiable statement, whereas "we reached AGI" just isn't.