Hacker News

ngruhn · today at 3:07 AM · 4 replies

I guess if you build the first AI that can autonomously self-improve, then nobody can catch up anymore.


Replies

mike_hearn · today at 2:07 PM

This is a common canard. AI already autonomously self-improves. The training pipelines for modern frontier models are filled with AI: it generates synthetic data, it cleans data, it judges output quality and feeds back via RL, it does hyperparameter tuning, it rewrites kernels for speed, and a thousand other things.

But: no singularity. At least not yet.

The flaw in this thinking seems to be the idea that AI is a singular thing: you point the model back at its own source code, sit back, and watch as it does everything at once. Right now it's more like an army of AI assistants organized by human researchers. You often need specialized models for this stuff; you can't just use GPT for everything.
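To make the "AI judges output quality and feeds back" part concrete, here is a minimal sketch of one common pattern: best-of-n sampling with a judge model filtering synthetic training data. The `generate` and `judge_score` functions below are hypothetical stand-ins for real model calls, not any particular lab's API.

```python
import random

def generate(prompt, rng):
    # Stand-in for a generator model: returns a candidate answer
    # with a random "quality" attached for demonstration purposes.
    return {"prompt": prompt,
            "text": f"answer-{rng.randint(0, 999)}",
            "quality": rng.random()}

def judge_score(candidate):
    # Stand-in for a judge model scoring a candidate in [0.0, 1.0].
    return candidate["quality"]

def best_of_n(prompt, n, rng):
    # Sample n candidates and keep the one the judge likes most.
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=judge_score)

def build_synthetic_dataset(prompts, n=8, threshold=0.5, seed=0):
    rng = random.Random(seed)
    dataset = []
    for p in prompts:
        best = best_of_n(p, n, rng)
        # Only judge-approved samples enter the training set.
        if judge_score(best) >= threshold:
            dataset.append((p, best["text"]))
    return dataset

data = build_synthetic_dataset([f"q{i}" for i in range(10)])
print(f"kept {len(data)} of 10 prompts")
```

Note that nothing here touches the model's own source code: the "self-improvement" is one model curating training data for another, orchestrated by humans, which is exactly the point of the comment above.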

hattmall · today at 3:41 AM

That seems really paradoxical, and I think it would just burn up compute. The AI doesn't really have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.
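The model-collapse intuition has a well-known toy demonstration: repeatedly fit a distribution to samples drawn from the previous fit, and the variance shrinks generation after generation (the MLE variance estimator is biased low by a factor of (n-1)/n, so diversity decays in expectation). A minimal sketch with a Gaussian, using only the standard library:

```python
import random
import statistics

def refit(samples):
    # MLE refit: sample mean and (biased) population std dev.
    return statistics.fmean(samples), statistics.pstdev(samples)

def simulate_collapse(n_gens=200, n_samples=50, seed=0):
    random.seed(seed)
    mu, sigma = 0.0, 1.0          # generation 0: standard normal
    history = [sigma]
    for _ in range(n_gens):
        # Each generation trains only on the previous generation's outputs.
        samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
        mu, sigma = refit(samples)
        history.append(sigma)
    return history

hist = simulate_collapse()
print(f"initial sigma: {hist[0]:.3f}, final sigma: {hist[-1]:.3f}")
```

Real model collapse in generative models is more complicated than this Gaussian toy, but the mechanism is the same: without fresh external signal, each generation narrows the distribution it learned from.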

lukan · today at 3:40 AM

But what if a second AI that can self-improve comes along?

Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.

techpression · today at 5:14 AM

If that happens, catching up will be meaningless; everything we know and care about will change. You don't even have to be doomsday about it: a self-improving AI will quickly become more efficient than a human brain, all the data centers will be useless, tech companies will collapse (as will most others), and everyone will have an incredible AI resource for the price of a hotdog. There's no way it wouldn't leak from whoever made it, either through people or through the AI itself.
