Hacker News

tyrus yesterday at 7:35 PM

> precise alignment with me personally, or even with my own business

Seems like a strawman; I don't think anyone means this when they talk about alignment.

More general goals, like avoiding paperclip maximization, are broadly applicable to humanity.


Replies

zozbot234 yesterday at 8:03 PM

If you've built an agent that can act even vaguely like a paperclip maximizer, you've already solved 99.999% or more of the alignment problem. So far, the hard part of alignment has been getting the AI to do something useful in pursuit of the right goal rather than just waste energy. We still have no idea how to do this effectively: even modern "RL from verified feedback" systems are essentially toys, the equivalent of playing video games, not of doing anything useful in the real world.
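For readers unfamiliar with the term, here is a minimal sketch of what "RL from verified feedback" means in practice: the reward signal exists only because the task has a mechanical checker, much like a score in a video game. The task (toy integer arithmetic), the function names, and the random-guess "policy" are illustrative assumptions, not any real system's implementation.

```python
# Sketch of RL-from-verified-feedback: reward comes from a programmatic
# verifier, analogous to a video-game score. Everything here is a toy
# stand-in chosen for illustration.
import random

def verifier(question: str, answer: str) -> float:
    """Reward 1.0 iff the answer matches the mechanically checkable truth."""
    a, b = map(int, question.split("+"))
    return 1.0 if answer.strip() == str(a + b) else 0.0

def policy(question: str) -> str:
    """Stand-in for a model: emits a (sometimes wrong) candidate answer."""
    a, b = map(int, question.split("+"))
    return str(a + b + random.choice([-1, 0, 0, 1]))

# The training signal exists only because the task admits a verifier.
# "Be useful in the real world" has no such checker, which is the
# commenter's point about why these systems remain toy-like.
total = 0.0
for _ in range(100):
    q = f"{random.randint(0, 9)}+{random.randint(0, 9)}"
    total += verifier(q, policy(q))
print(f"verified-reward rate: {total / 100:.2f}")
```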