
stratos123 · yesterday at 7:13 PM

> AGI isn't going to happen within the next 30 years so this is moot. The actual researchers have said so many times. It's only the business people and laypeople whooping about AGI always being imminent.

The statements of which "actual researchers" are you relying on for your "next 30 years" estimate? How do you reconcile them with the sub-10- or even sub-5-year timelines of other AI researchers, like Daniel Kokotajlo[1] or Andrej Karpathy[2]? For that matter, what about polls of AI researchers, which usually yield a median much shorter than 30 years[3]?

[1] https://x.com/DKokotajlo/status/1991564542103662729

[2] https://x.com/karpathy/status/1980669343479509025

[3] https://80000hours.org/2025/03/when-do-experts-expect-agi-to...


Replies

MadxX79 · yesterday at 7:32 PM

I'm guessing they have a lot of shares in the AI companies they work(ed) for, and they would like to pump their value so they can buy an even nicer Caribbean island than they can already afford?

Spivak · yesterday at 7:44 PM

See, this is a fun game, because when you're fishing for a breakthrough you can predict tomorrow or 100 years out. Nobody, not even the experts, has any idea until it happens and they're holding it in their hands. To make any kind of accurate prediction, you would have to have already observed other civilizations discover AGI, so you could say how close an environment like ours is to even being capable of making the leap. We could be missing something huge; we could need multiple seemingly unrelated breakthroughs to get there. We're for sure closer, but we could still be miles away, and GPTs might even be barking up the wrong tree.

The reason this discussion is already annoying, and poised to get so much worse, is that hundred-billion-dollar companies now have a direct financial incentive to say they did it. I expect the definition will get softened to near meaninglessness so some marketing department can slap "AGI" on their thing.

linkregister · yesterday at 7:46 PM

I think you are overindexing on the integer value given in the parent post, rather than seeing its essence: LLMs in their current form only excel at tasks they have been specifically trained for.

Karpathy himself has publicly stated that AGI is only possible with a new paradigm (one his group is working toward). He claims RLHF and attention models are near the end of their logarithmic curve. The concept of the "self-training AI" is likely impossible without a new kind of model.

We will likely see some classes of human skills completely taken over by LLMs this decade: call centers (already capable in 2026), SWE (in the next couple of years). Bear in mind the frontier labs have spent many billions on exhaustive training in every aspect of these domains. They are focusing training on the highest-value occupations, but the long tail is huge.

It will be interesting to see if this investment will be obviated by a "real AGI" capable of learning without going through the capital-intensive training steps of current models.
