Hacker News

jcgrillo · yesterday at 6:26 PM

Thanks, I will. Whether a computer program is outside the control of the original human (e.g., it spawned a subprocess or something) is immaterial if we properly hold that human responsible for the consequences of running the program. If you run a computer program and it does something bad, then you did something bad. Simple, effective. If you don't trust the program to do good things, then simply don't run it. If you do run it, be prepared to defend your decision. Also, that's how it currently works, so we don't really need anything new. In this context "AI safety" is about bounding liability. So I guess you might care about it if you're worried about being held liable? The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.


Replies

pixl97 · yesterday at 11:04 PM

>The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.

See, this is the fun thing about liability: we tend to try to limit scenarios where people can cause near-unlimited damage when they have very limited assets in the first place. That's why things like asymmetric warfare are so expensive to try to prevent.

But hey, have fun going after some teenager with $3 to their name after they cause a billion dollars in damages.
