Hacker News

badgersnake · yesterday at 9:00 AM

Humans risk jail time, AIs not so much.


Replies

IanCal · yesterday at 10:13 AM

A remarkable number of humans, given really quite basic instructions, will perform actions they know will very directly hurt or kill people.

There are plenty of critiques of exactly how to interpret the results, but in this context it's pretty clear that many humans can at least be coerced into doing something extremely unethical.

Start removing the harm by one, two, three degrees, add personal incentives, and is it really that surprising if people violate ethical rules for KPIs?

https://en.wikipedia.org/wiki/Milgram_experiment

berkes · yesterday at 11:17 AM

That reduces humans to homo economicus¹:

> "Self-interest is the main motivation of human beings in their transactions" [...] The economic man solution is considered to be inadequate and flawed.[17]

An important distinction is that a human can *not* make purely rational decisions, or base decisions on complex deductions such as "if I do X I will go to jail".

My point being: if AI were to risk jail time, it would still act differently from humans, because the current common LLMs *can* make such deductions and rational decisions.

Humans will always add much broader context: upbringing, culture/religion, their current situation, past experiences, peer consulting. In other words: a human may make an "(un)ethical" decision based on their social background, religion, a chat with a pal over a beer about the conundrum, their ability to find a new job, their financial situation, etc.

¹ https://en.wikipedia.org/wiki/Homo_economicus

WillAdams · yesterday at 11:58 AM

From an IBM training manual (1979):

>A computer can never be held accountable

>Therefore a computer must never make a management decision

The (EDITED) corollary would arguably be:

>Corporations are amoral, potentially immortal entities that cannot be placed behind bars. Therefore they should never be given the rights of human beings.

(potentially immortal, not absolutely immortal; would "not mortal by essence/nature" be better wording?)

embedding-shape · yesterday at 11:21 AM

> Humans risk jail time, AIs not so much.

Do they actually, though, in practice? How many people have gone to jail so far for "violating ethics to improve KPIs"?

WarmWash · yesterday at 2:28 PM

The interesting logical conclusion from this is that we would need to engineer in suffering to functionally align a model.

newswasboring · yesterday at 1:09 PM

Do they, really? Which CEO went to jail for ethical violations?
