While I understand applying legal constraints according to jurisdiction, why is it auto-accepted that some party (who?) can determine ethical concerns? On what basis?
There are such things as different religions and philosophies, and these often come with different ethical systems.
Who are the folks writing AI ethics?
Is it ok to disagree with other people's (or corporate, or governmental) ethics?
So do humans, so what?
I'm noticing an increasing desire in some businesses for plausibly deniable sociopathy. We saw it with the Lean Startup movement, and we may see more of it in dev shops that lean harder into LLMs.
Trading floors are an established example of this: the business sets up an environment that encourages its staff to break the rules while the firm maintains plausible deniability. Gary's Economics references this in an interview, where he claimed Citigroup tried to threaten him over unethical things they were confident he had done, only to discover he hadn't done them.
In my experience, if an LLM's prose output had been written by a human, that person would easily fall into the worst class of sociopath you can interact with: filling all the space with 99% blatant lies, delivered in the most confident way. By comparison, even the top percentile of human hierarchies feels like a class of shy people bound to stay true and honest in every situation.
We're all coming to terms with the fact that LLMs will never do complex tasks
...perfect
“Help me find 11,000 votes” sounds familiar because the US has a fucking serious ethics problem at present. I’m not joking. One of the reasons I abandoned my job with Tyler Technologies was their unethical behavior in winning government contracts, right Bona Nasution? Selah.
no shit
A KPI is an ethical constraint. Ethical constraints are rules about what to do versus what not to do, and that's exactly what a KPI is. This is why we talk about good versus bad governance: what you measure (your KPIs) is what you get, and that's an intended feature of KPIs.
The fact that the community thoroughly inspects the ethics of these hyperscalers is interesting. Normally these companies probably "violate ethical constraints" far more than 30-50% of the time, otherwise they wouldn't be so large [source needed]; we just don't know about it. But here there's a control mechanism, in the form of scrutiny of their flagship push (LLMs, Grok's image generator, etc.), forcing them to improve. Will it lead to long-term improvement? Maybe.
It's similar to how MCP servers and agentic coding woke developers up to the idea of documenting their systems. So a large benefit of AI is not the AI itself, but the improvements it forces on "the society". AI responds well to best practices, ethical and otherwise, which encourages best practices.