Personally, I'm not even sold on the current paradigm being too limited to produce AGI - there are still several OOMs' worth of compute increase available, and algorithmic improvements have been accumulating faster than predicted.
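To make "several OOMs" concrete, here's a rough back-of-envelope sketch. Every multiplier below is an illustrative assumption, not a measured value - the point is just that a few independent scaling factors compound multiplicatively into orders of magnitude:

    # Back-of-envelope: how independent compute multipliers compound into OOMs.
    # All factors are illustrative assumptions, not measurements.
    import math

    factors = {
        "larger training clusters (more chips)": 30,   # assumed
        "hardware efficiency (FLOP per dollar)": 8,    # assumed
        "longer training runs": 3,                     # assumed
    }

    total = 1.0
    for name, multiplier in factors.items():
        total *= multiplier
        print(f"{name}: x{multiplier}")

    # log10 of the combined multiplier gives the number of OOMs
    print(f"combined: x{total:.0f} (~{math.log10(total):.1f} OOMs)")

With these made-up numbers that's roughly 3 OOMs before even touching transistor density.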
But even assuming that a major breakthrough is required, it seems ludicrous to me to jump from that to a timeline of a decade or more. This isn't like fusion power research, where you spend 10 years building a new installation only to find new problems. Software development is inherently faster, and AI research in particular has historically moved extremely quickly. (GPT-3 is only 6 years old.) I don't think a wall in AI progress, if one comes at all, will last more than a few years.
We’ve been doing AI research since the 1950s, and as with most other fields there are peaks and valleys. History books are filled with promises of breakthroughs that never happened, even though at one time they were all “very close”.
> there are still several OOMs worth of compute increase available
Where and how? Aren't we reaching the physical limits of making transistors smaller?