> Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do
> Whether arena.ai is a suitable metric for AGI can be debated; a strong case can probably be made that it's not. However, that's irrelevant: the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
No, the clause is clearly meant for when we're near AGI, and we aren't near AGI.
The "I" in AGI stands for IPO.
The "S" stands for Safety.
AFAIK we're still working on a unified definition and testing theory for whatever "AGI" is.
Altman has personally claimed that we are close to AGI. By his own standard, then, OpenAI should invoke the self-sacrifice clause.