Hacker News

adamgordonbell · yesterday at 11:32 PM

AGI’s 'general' is the wrong word, I think. Humans aren’t general, we’re jagged: strong in some areas, weak in others, and already surpassed in many domains.

LLMs are way past us at language, for instance. Calculators passed us at calculating, etc.


Replies

onlyrealcuzzo · today at 1:27 AM

We don't call a calculator intelligent.

A calculator is extremely useful, but it is not intelligent.

A computer is extremely useful, but it is not intelligent.

Airplanes don't flap their wings, but they're damn sure useful, and also not intelligent.

If LLMs cannot learn to beat not-especially-difficult games better than young teens can, they are not intelligent.

They are extremely useful. But they are not AGI.

Words matter.

jwpapi · yesterday at 11:39 PM

Interesting take.

Just to drive that thought further.

What are you suggesting, should we rename it? To me the fundamental question is this:

Do we still have tasks that humans can do better than AIs?

I like the question. I think another good test is "make money". There are humans who can generate money from their laptop; I don’t think AI would be net positive there.

I’ve tried to create a Polymarket trading bot with Opus 4.6. The ideas were full of logical fallacies and many many mistakes.

But also I’m not sure how they would compare against an average human with no statistics background.

I think it’s really about establishing whether by AGI we mean better than the average human or better than the best human.

EternalFury · yesterday at 11:36 PM

We are jagged, but we can smooth that jaggedness if we choose to do so. LLMs stay jagged.

Real_Egor · today at 2:00 AM

I’d actually focus on something else entirely here.

Let's be honest: we are giving LLMs and humans the exact same tasks, but are we putting them on an equal playing field? Specifically, do they have access to the same resources and behavioral strategies?

- LLMs don't have spatial reasoning.

- LLMs don't have a lifetime of video game experience starting from childhood.

- LLMs don't have working memory or the ability to actually "memorize" key parameters on the fly.

- LLMs don't have an internal "world model" (one that actively adapts to real-world context and the actual process of playing a game).

... I could go on, but I've outlined the core requirements for beating these tests above.

So, are we putting LLMs and humans in the same position? My answer is "no." We give them the same tasks, but their approach to solving them—let alone their available resources—is fundamentally different. Even Einstein wouldn't necessarily pass these tests on the first try. He’d first have to figure out how to use a keyboard, and then frantically start "building up new experience."

P.S. To quickly address the idea that LLMs and calculators are just "useful tools" that will never become AGI—I have some bad news there too. We differ from calculators architecturally; we run on entirely different "processors." But LLMs are architecturally built the same way we are: a neural network that processes information and makes decisions. That means our only real advantage over them is our baseline configuration and the list of "tools" connected to our neural network (senses, motor functions, etc.). To me, this means LLMs don't face any fundamental "architectural" roadblock. We just have a head start, and their speed of evolution is significantly faster.

suddenlybananas · today at 8:01 AM

LLMs haven't passed us in language; a child can learn language with far, far less data than an LLM needs.
