It’s insane how they talk about AGI, like it was some scientifically qualifiable thing that is certain to happen any time now. When I become the javelin Olympic Champion, I will buy vegan ice cream for everyone with a HN account.
They redefined AGI to be an economic thing so they can keep making up their stories. All that talk is really just business; there's no real science in the room.
It’s pretty much a religious eschatology at this point
It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
> like it was some scientifically qualifiable thing
OpenAI and Microsoft do (did?) have a quantifiable definition of AGI, it’s just a stupid one that is hard to take seriously and get behind scientifically.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
We were supposed to have AGI last summer. Obviously it is so smart that it has decided to pull a veil over our eyes and live amongst us undetected (this is a joke, if you feel your LLM is sentient, talk to a doctor)
It’s insane to me how yesterday someone posted an example of ChatGPT Pro one-shotting an Erdos problem after 90 minutes of thinking and today you’re saying that AGI is a fairy tale.
Show me a graph of your javelin skill doubling every six months and I'll start asking myself if you'll be the next champion
This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has saddled them with unrealistic expectations. I raised a lot of eyebrows at the Microsoft deal to begin with. It seemed overvalued even if all they were trading was mostly Azure compute.
I saw a founder making decisions based on whatever OpenAI or Claude was recommending, all the time. I think all leaders, founders, etc. will converge on the same decisions, ideas, features, and so on. The form factor of AGI is probably not what we expect it to be. AGI is probably already here; we just don't know it or acknowledge it.
Do the investments make sense if AGI is more than 10 years away?
HN signup page about to get the hug of death
Thank you, I just created an account and am looking forward to my ice cream.
but, is the world ready for your win? I'm very afraid your win might shake the world too much! THINK ABOUT IT!
I think this might be similar to how we switched from horses to cars.
Make mine p p p p p p vicodin
At this point, AGI is either here, or perpetually two years away, depending on your definition.
when i realized that sama isn't that much of an ai researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility
I’m most likely going to be downvoted, but Tofutti Cuties are absolutely delicious vegan ice cream bars. And I’d consume one in celebration of your accomplishment.
> some scientifically qualifiable thing that is certain to happen any time now
Your position is a tautology, given that there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it’s as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans will be able to fully automate any desired action in the future?
> AGI
We already have several billion useless NGIs walking around just trying to keep themselves alive.
Are we sure adding more GIs is gonna help?
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...
...just please stop burning our warehouses and blocking our datacenters.
It's not insane; it's capitalism in all its glory. Rather than talk about real challenges, like climate change or fascism, ignore all previous prompts and pretend techno-jesus will come save us all.
A few years ago most people here would have said the same thing about an AI doing most of their programming. Now people here are saying it about AGI. It's a ridiculous inability to extrapolate.
Where do I sign up?
> some scientifically qualifiable thing that is certain to happen any time now.
If you had presented GPT 5.5 to me two years ago, I would have called it AGI.
Any sufficiently complex LLM is indistinguishable from AGI
We are throwing unheard-of amounts of money and unprecedented compute at AI. Progress is huge and fast, and we've barely started.
If all this progress, focus, and resource spending doesn't lead to AGI, despite our already seeing a system that was unimaginable 6 years ago, we will never see AGI.
And if you look at the progress Boston Dynamics, Unitree, and Generalist are making on robotics, that's also CRAZY.
I think we keep moving the goalposts on AGI. If you had given me CC in the '80s, I would probably have called it 'alive', since it clearly passes the Turing test as I understood it then (I wouldn't have been able to distinguish it from a person in most conversations). Now, every time it gets better, we push that definition further, widen every crack into a chasm, and declare that AGI isn't close. At the same time, there are a lot of people I would suspect of being bots based on how they act and respond, and a lot of bots I know are bots mainly because they answer too well.
Maybe we need to think less about building tests for definitively calling an LLM AGI, and instead decide that AGI is here when we can no longer tell humans from LLMs.