Any serious LLM user will tell you that there's no way to get from LLMs to AGI.
These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.
Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it.
This is also why there have been zero to very few new scientific discoveries made by LLMs.
I mean yeah, but that's why there are far more research avenues these days than just pure LLMs, world models for instance. The thinking is that if LLMs can achieve near-human performance in the language domain, then we must be very close to achieving human performance in the "general" domain - that's the main thesis of the current AI financial bubble (see articles like AI 2027). And if that is the case, you still want as much compute as possible, both to accelerate research and to get more performance out of other architectures that benefit from scaling.
Most humans aren't making new scientific discoveries either, are they? Does that mean they don't have AGI?
Intelligence is mostly about pattern recognition. All those model weights represent patterns, compressed and encoded. If you can find a similar pattern in a new place, perhaps you can make a new discovery.
One problem is that the patterns are static. Sooner or later, someone is going to figure out a way to give LLMs "real" memory. I'm not talking about keeping a long-term context and extending it with markdown files, RAG, etc. like we do today for an individual user, but about updating the underlying model weights incrementally, basically resulting in a learning, collective memory.
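Conceptually it would look something like this (a minimal PyTorch sketch, not anyone's actual system; the toy model, the memory_write function, the learning rate, and the random data are all placeholders): instead of stuffing new facts into a per-user context window, each new interaction becomes a small gradient step, so the knowledge lands in the shared weights themselves.

    import torch
    import torch.nn as nn

    # Stand-in for the pretrained model; in reality this would be the LLM itself.
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def memory_write(x, target):
        """One incremental update: the 'memory' is a small gradient step on the shared weights."""
        optimizer.zero_grad()
        loss = loss_fn(model(x), target)
        loss.backward()
        optimizer.step()  # the weights change for every future user, not just this session
        return loss.item()

    # Every new interaction nudges the collective weights a little.
    for _ in range(3):
        x, y = torch.randn(8, 16), torch.randn(8, 16)
        print(memory_write(x, y))

The hard part isn't the mechanics of the update, it's doing this at scale without catastrophic forgetting or letting bad data poison the shared weights.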