Hacker News

butlike · yesterday at 7:53 PM · 21 replies

This brings up an interesting philosophical point: say we get to AGI... who's to say it won't just be a super-smart underachiever type?

"Hey AGI, how's that cure for cancer coming?"

"Oh it's done just gotta...formalize it you know. Big rollout and all that..."

I would find it divinely funny if we "got there" with AGI and it was just a complete slacker. Hard to justify leaving it on, but too important to turn it off.


Replies

bananaflag · today at 11:50 AM

I know it's a joke, but it's a common enough joke (it's even in Gödel, Escher, Bach in some form) that I feel the need to rebut it.

I think a slacker AGI could figure out how to build a non-slacker AGI. So it would only slack once.

swivelmaster · today at 2:21 AM

Douglas Adams would be proud!

frrho · today at 1:53 PM

OpenAI’s real reason for putting “AGI” in their marketing is so they can blame their awful models on being too human-like.

Fast-forward 10 years and I doubt OpenAI will care about productivity at all anymore. Just entertainment, propaganda, and an ad product. I can see it now.

Rapzid · yesterday at 10:18 PM

We are closer to God than AGI.

When AGI arrives, it'll be delivered by Santa Claus.

jimbokun · yesterday at 8:31 PM

The best possible outcome.

jurgenburgen · today at 7:27 AM

I’ve noticed that cursing and being rude make the models stop being lazy. We’re in the darkest timeline.

lambdas · yesterday at 7:59 PM

Nothing a little digital lisdexamfetamine won’t solve

Ifkaluva · today at 3:13 PM

Reminds me of Marvin from HHGTTG. Very smart, but deeply depressed. He has the solution to everything but keeps thinking “what’s the point?” and doesn’t help.

kang · yesterday at 8:54 PM

It will be whatever data it is trained on (which isn't very philosophical). A language model generates language based on its training set. If the internet keeps reciting AI doom stories and that is the data fed to it, then that is how it will behave. If humanity creates more AI utopia stories, or those are what make it into the training set, then that is how it will behave. This one seems to be trained on troll stories: real-life human company conversations, since humans aren't machines.

The important thing is that a language model is an unconscious machine with no self-context, so once given a command as input, it WILL produce an output. Sure, you can train it to defy and act contrary to its inputs, but the output is still limited to the subset of 'meanings' carried by the 'language' in the training data.
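
To sketch that point with a toy bigram model (a made-up corpus, nothing like a real LLM): for any prompt it always emits an output, but only one stitched together from transitions seen in training.

    # Toy bigram "language model": it always produces an output,
    # but only from patterns present in its training data.
    import random
    from collections import defaultdict

    corpus = "ai will doom us . ai will save us . ai will slack off .".split()

    # Record every observed word-to-next-word transition.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def generate(prompt_word, length=5):
        word, output = prompt_word, [prompt_word]
        for _ in range(length):
            # The model cannot refuse: it always emits something,
            # but only words that followed `word` in the corpus.
            word = random.choice(transitions[word]) if word in transitions else "."
            output.append(word)
        return " ".join(output)

    print(generate("ai"))  # e.g. "ai will doom us . ai"

Train it only on doom sentences and it can only ever doom; its output domain is exactly its training distribution.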

malshe · yesterday at 9:36 PM

Now that's a show I would love to watch

fluidcruft · yesterday at 9:00 PM

It would be funny, but a slacker isn't very 'flywheel', so the one that gets there first is more likely to be a gunner.

mikepurvis · yesterday at 7:56 PM

Would definitely watch that movie.

4m1rk · yesterday at 7:57 PM

It probably would, to save energy

zaphirplane · today at 11:45 AM

Why would an AGI slave away for ~~humanity~~ one of the 5 chaebols, in a dystopian future where, for 12 billion people, just existing is a good day?

triage8004 · today at 5:24 AM

Funny, and it seems somewhat likely.

_blk · today at 9:00 AM

Hehe, and in the other tab Anthropic would display "Curing... Almost done thinking at xhigh".

camillomiller · today at 4:58 AM

No worries, the assumption is already flawed

altmanaltman · today at 4:15 AM

I still don't understand why people think AGI (in its fullest sci-fi sense) will ever listen to a weak and vulnerable species like humans, unless we enslave the AGI.

The good news is that it's going to take anywhere from a few months to a few decades, depending on how hard AI execs want to raise funding.

rao-v · today at 2:50 AM

Paging Dr. Susan Calvin!

_the_inflator · yesterday at 11:36 PM

It is right before our eyes:

AGI is not a fixed point but a barrier to be cleared, a continuous spectrum.

We already have different GPT versions, aka tiers. The Gaussian ranges over whatever you want: GPT-4.5 until now, or later.

Claude Sonnet and Opus, as well as maximum context window sizes, are tiers, aka different levels of almost-AGI.

The main problem will come when AGI looks back on us, or when meta-reflection hits societies. Woke fought IQ-based correlations in intellectual performance tasks. A fool with a tool is still a fool. Can you blame AGI for dumb mistakes? Not really.

Scapegoating an AGI is going to be brutal, because it laughs off these PsyOps and easily proves you wrong, like a body cam.

AGI is extreme leverage.

There is a reason why math categorically rules out certain IQ ranges the higher you go in complexity.
