If you've built an agent that can act even vaguely like a paperclip maximizer, you've already solved 99.999% or more of the alignment problem. The hard part of alignment so far is getting the AI to do something useful in pursuit of the right goal, rather than just waste energy. We still have no idea how to do this with any effectiveness: even modern "RL from verified feedback" systems are effectively toys, the equivalent of playing video games rather than doing something useful in the real world.