Hacker News

WarmWash · yesterday at 3:12 PM

> But it’s quite simple to see LLMs are quite far still from being human.

At this point I think it's a fair bet that whatever supersedes humans in intelligence will likely not be human-like. There is a baked-in assumption that AGI only comes in human flavor, which I believe is almost certainly not the case.

To make a loose analogy: a bird looks at a drone and scoffs at its inability to fly quietly or perch on a branch.


Replies

conductr · yesterday at 7:05 PM

> I believe is almost certainly not the case.

Agree. These are Altman's "Quiet Dominance / Over-reliance / Silent Surrender" risks [0]. I feel this is extremely likely and has already happened to some degree with technology in general; AI will be even more pervasive in letting people vibe their life decisions, likely with unintended consequences. Vibe coding works because it's quick to change, edit, or throw away, but that doesn't generalize well to the real, physical world.

Should also point out this is acceptable because it's just a contrived example of bad LLM-fu. It's like how you wouldn't search Google for the closest carwash and ask whether you should take your car if you already knew the answer. Instead, you'd ask whether it's open, whether it does full detailing, what the prices are, etc. Many people with bad Google-fu have trouble finding answers to their questions too, and that's persisted through the past couple decades of Google's dominance in information seeking.

[0] Altman describes a more subtle, long-term threat where AI becomes deeply integrated into societal, political, and economic decision-making. He worries that society will become overly dependent on AI, trusting its reasoning over human judgment, leading to a "silent surrender" of human agency.