They define AGI in their charter:
> artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work
This definition is not very precise though. For example, I think it can be argued from this definition that we had already reached AGI by the year 2010 (or earlier!). By 2010, computers were integrated into >50% of economically valuable work, to the point that humans had mostly forgotten how to do that work without them. Drafting blueprints by hand was already a thing of the past, slide rules were archaic, paper spreadsheets were long gone. You can debate whether these count as 'highly autonomous', but I don't think it's a clear slam-dunk either way. Not to mention dishwashers, textile weaving machines, CNC machines, assembly lines where >50% is automated, chemical/mineral refining operations, etc.
The definition reminds me of the common quip about robotics, "it's robotics when it doesn't work, once it works it's a machine".
Is "most" more than 50% of individual jobs? Or producing more than 50% of the value, dollar for dollar?
What does "economically valuable" mean here? Would it cover teaching? Child care? Healthcare? Etc.
In my experience, AGI always seemed to be the stand-in phrase for "human-like" intelligence, after AI was co-opted to mean simpler things like Markov-chain chatbots and state machines that control agent behaviour in video games.
If the definition has shifted once again to mean "a computer program that does a task pretty well for us", then what's the new term we're using to define human-level artificial intelligence?
> economically valuable work
That phrase is doing a ton of heavy lifting. What is considered economically valuable work is going to change from decade to decade, if not from year to year. What's considered economically valuable is also going to differ widely across individuals and nations within the exact same time frame.
I take "outperform" to mean "can replace".
y'see, I would not define a system as "highly autonomous" if it only responds to requests.
And I get that there are workarounds: effectively a cron job every second prompting "do the next thing" (sketched below).
But in my personal definition of "highly autonomous" it would not need prompting at all. It would be thinking all the time, independently of requests.
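A minimal sketch of that workaround, assuming a hypothetical `agent` object with a `step()` method that wraps whatever model is being prompted; the point is that the scheduler loop, not the model, supplies the "autonomy":

    import time

    def run_agent_loop(agent, interval_seconds=1.0):
        """Naive 'autonomy via cron' loop.

        `agent` is a hypothetical object whose step() method prompts the
        underlying model and returns whatever it decided to do. All the
        initiative lives in this loop; the model only ever responds.
        """
        while True:
            action = agent.step("do the next thing")  # one prompted 'thought'
            print(f"agent did: {action}")
            time.sleep(interval_seconds)  # idle between prompts; no independent thinking here

Which is exactly the objection: remove the loop and nothing happens, so calling the system itself "highly autonomous" feels like a stretch.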
That definition is, as I said, "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."
"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful.
"Highly" implies that there is a continuum, so where does directed end and autonomy begin?
"Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost as the work done by humans over history has shifted a great deal.
The point is that none of this is defined in a way so that people can agree that something has AGI/ASI/etc. or not. If people can't agree then there's no point in talking about it.
EDIT: interestingly, by the OpenAI definition a subset of humans would not qualify as having general intelligence.