Hacker News

protocolture · yesterday at 11:16 PM

>Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

Impossible. I anthropomorphise my chair when it squeaks. Humans anthropomorphise everything. They gender their cars and boats. This tool can actually make readable sentences and play a role.

You need to engineer around this, not make up arbitrary rules about using it.


Replies

dev_hugepages · today at 8:30 AM

The problem is that humans use this as a coping mechanism for things they don't understand: I don't understand why the printer doesn't work, so I give it a mind of its own.

This is harmless for inconsequential stuff like a chair, but when it's an LLM, people should at least understand its behavior so they don't get trapped. That means not trusting it with advice meant for the user, or on things it has no concept of, like time or self-introspection. (People ask the LLM after it acted, "Why did you delete my database?" when it has limited understanding of its own processing, so it falls back to, "You're right, I deleted the database. Here's what I did wrong: ... This is an irrecoverable mistake, blah, blah, blah...")

protocolture · yesterday at 11:23 PM

>>Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

Still angry about this. The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to, and LLMs are even better than animals at this. If you aren't gearing up for the inevitable LLM Rights movement, you aren't paying attention. It doesn't matter if it's artificial. The difference between a puppy and a cockroach is that we can relate better to the puppy. An LLM rights movement is inevitable; whether LLMs actually experience emotions is irrelevant, because they can cause humans to feel empathy, and that's what's relevant.

PhilipDaineko · today at 8:51 AM

Exactly. Furthermore, for this specific reason, AGI is not an objective term but a subjective one: agency exists in my mind, and I grant it to you. It was only by interacting with each other that we invented the concept of agency in the first place.

imrozim · today at 4:13 AM

Yeah, rules never work; you just engineer around it. I added an extra review step on AI outputs, because asking users to verify doesn't actually happen.
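A minimal sketch of what such an automated review step might look like, assuming a simple rule-based gate that runs before an AI output is accepted (all function names and patterns here are hypothetical illustrations, not from the comment):

```python
import re

# Hypothetical rule-based review gate for AI outputs. Instead of asking
# users to verify, every output passes through this check automatically.
# The patterns below are illustrative examples of "destructive" output.
DESTRUCTIVE_PATTERNS = [
    re.compile(r"\bDROP\s+(TABLE|DATABASE)\b", re.IGNORECASE),
    re.compile(r"\bDELETE\s+FROM\b(?!.*\bWHERE\b)", re.IGNORECASE | re.DOTALL),
    re.compile(r"\brm\s+-rf\b"),
]

def review_ai_output(text: str) -> tuple[bool, list[str]]:
    """Return (approved, reasons): reject output matching any destructive pattern."""
    reasons = [p.pattern for p in DESTRUCTIVE_PATTERNS if p.search(text)]
    return (not reasons, reasons)

approved, reasons = review_ai_output("DROP TABLE users;")
print(approved)   # False: flagged as destructive
approved, _ = review_ai_output("SELECT name FROM users WHERE id = 1;")
print(approved)   # True: passes review
```

In practice the review step could be anything from a regex gate like this to a second model or a mandatory human sign-off; the point is that the check runs on every output rather than relying on the user to do it.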

mock-possum · today at 5:59 AM

Entirely possible: all it takes is self-awareness and self-control. If you know you do those things, then you have a choice.

p-e-w · today at 2:44 AM

Yup. That post is a typical example, symptomatic of modern technology culture, of calling for humans to change their nature in response to technology.

This is a fundamental mistake. It’s always the job of technology (indeed, its most important job) to work within the constraints of human nature, not the other way round. Being unable to do that is the defining characteristic of bad technology.

slim · today at 8:34 AM

Dude, we can literally deliberately dehumanize human beings. The way to engineer a culture to "not anthropomorphize" anything is known and well documented.

andai · today at 4:12 AM

[flagged]