FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.
I don't think you can use lindy on trends as if trends are static objects, but that's another conversation.
AGI has become such a meaningless, nondescript term that arguing about when or whether it arrives is pointless. Even OpenAI caved and removed the AGI clause from its contract with Microsoft because they weren't fully sure we aren't there already. The original ARC-AGI was hailed as proof that AGI is not here yet, but now that ARC 1 and 2 have been saturated, no one wants to consider that perhaps we've crossed the point where average humans are being left behind. Frontier models are primarily limited by context and modality at this point, not by intelligence.
Ok, but you can just look at the METR curve. Models have saturated the 50% time horizon; the 80% horizon is now at 3 hours. The rate of progress is accelerating, not slowing down. There's no indication yet that this is a sigmoid!
> FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.
I mean, that's called "having an opinion".
He only has 1.5 more months. If he's wrong, he needs to own it. Same for Eliezer Yudkowsky. But these people have too much riding on their brands; no one has the courage to fess up to being wrong. Given how many podcasts he and others have been on professing this belief, it will be hard to just pretend otherwise.
Mind you, he is only personally invested insofar as he's staked his reputation on it. Throughout his writing, he expresses the same point over and over: he desperately wants AI to slow down, advocates for policies that would slow it down, and most likely nothing would bring him greater peace than seeing a sigmoid curve appear.