Humans risk jail time, AIs not so much.
That reduces humans to the homo economicus¹:
> "Self-interest is the main motivation of human beings in their transactions" [...] The economic man solution is considered to be inadequate and flawed.[17]
An important distinction is that a human can *not* make purely rational decisions, or base decisions on complex deductions such as "if I do X I will go to jail".
My point being: if AI were to risk jail time, it would still act differently from humans, because the current common LLMs can make such deductions and rational decisions.
Humans will always bring in much broader context: upbringing, culture/religion, their current situation, past experiences, peer consultation. In other words: a human may make an "(un)ethical" decision based on their social background, religion, a chat with a pal over a beer about the conundrum, their ability to find a new job, their financial situation, etc.
From an IBM training manual (1979):
>A computer can never be held accountable
>Therefore a computer must never make a management decision
The (EDITED) corollary would arguably be:
>Corporations are amoral entities which are potentially immortal who cannot be placed behind bars. Therefore they should never be given the rights of human beings.
(Potentially, not absolutely, immortal. Would "not mortal by nature" be better wording?)
> Humans risk jail time, AIs not so much.
Do they actually, though, in practice? How many people have gone to jail so far for "violating ethics to improve KPIs"?
The interesting logical conclusion from this is that we need to engineer in suffering to functionally align a model.
Do they, really? Which CEO went to jail for ethical violations?
A remarkable number of humans, given really quite basic prompting, will perform actions they know will very directly hurt or kill people.
There are a lot of critiques about exactly how to interpret the results, but in this context it's pretty clear lots of humans can at least be coerced into doing something extremely unethical.
Start removing the harm by one, two, three degrees, add personal incentives, and is it that surprising if people violate ethical rules for KPIs?
https://en.wikipedia.org/wiki/Milgram_experiment