
augusteo · yesterday at 6:17 PM

The framing of AI risk as a "rite of passage" resonates with me.

The "autonomy risks" section is what I think about most. We've seen our agents do unexpected things when given too much latitude. Not dangerous, just wrong in ways we didn't anticipate. The gap between "works in testing" and "works in production" is bigger than most people realize.

I'm less worried about the "power seizure" scenario than the economic disruption one. AI will take over more jobs as it gets better. There's no way around it. The question isn't whether, it's how we handle the transition and what people will do.

One thing I'd add: most engineers are still slow to adopt these tools. The constant "AI coding is bad" posts are evidence of that, even as cutting-edge teams use it successfully every day. The adoption curve matters for how fast these risks actually materialize.


Replies

BinaryIgor · yesterday at 6:29 PM

What makes you think they will just keep improving? It's not obvious at all; we might soon hit a ceiling, if we haven't already. Time will tell.

There are lots of technologies that have been 99% done for decades; it might be the same here.
