We've been hearing this for 3 years now. And 2025 especially was full of "they've hit a wall, no more data, running out of data, plateau this, saturated that". And yet, here we are. Models keep getting better, at a broader range of tasks, and more useful by the month.
Yes, and Moore's law took decades to start failing. Three years of history isn't even close to enough to predict whether we'll see continued exponential improvement or an insurmountable plateau. We could hit it in 6 months or 10 years, who knows.
And at least with Moore's law, we had some understanding of the physical realities as transistors got smaller and smaller, and could reasonably predict when we'd start to hit limitations. With LLMs, we just have no idea. And that could go either way.
> We've been hearing this for 3 years now
Not from me you haven't!
> "they've hit a wall, no more data, running out of data, plateau this, saturated that"
Everyone thought Moore's Law was infallible too, right until they hit that bend. What hubris to think these AI models are different!
But you've probably been hearing that for 3 years too (though not from me).
> Models keep on getting better, at more broad tasks, and more useful by the month.
If you say so, I'll take your word for it.
> And yet, here we are.
I dunno. To me it doesn’t even look exponential any more. We are at most on the straight part of the incline.
Model improvement is very much slowing down, if we actually use fair metrics. Most of the improvement in the last year or so comes down to external factors, like better tooling, or the highly sophisticated practice of throwing way more tokens at the same problem (reasoning and agents).
Don't get me wrong, LLMs are useful. They just aren't the kind of useful that Sam et al. sold to investors. No AGI, no full replacement of human workers, no massive reduction in the cost of SOTA.