logoalt Hacker News

pixl97yesterday at 4:57 PM1 replyview on HN

I mean, you can think whatever you want. As we make agents and give them agency expect them to do things outside of the original intent. The big thing here is agents spinning up secondary agents, possibly outside the control of the original human. We have agentic systems at this level of capability now.


Replies

jcgrilloyesterday at 6:26 PM

Thanks, I will. Whether a computer program is outside the control of the original human or not (e.g. spawned a subprocess or something) is immaterial if we properly hold that human responsible for the consequences of running the computer program. If you run a computer program and it does something bad, then you did something bad. Simple, effective. If you don't trust the program to do good things, then simply don't run it. If you do run it, be prepared to defend your decision. Also that's how it currently works so we don't really need anything new. In this context "AI safety" is about bounding liability. So I guess you might care about it if you're worried about being held liable? The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.

show 1 reply