
Kim_Bruning yesterday at 8:18 PM

To amplify:

It's also potentially lethally stupid. What if an industrial robot arm decides to smash a €10,000 machine next door, or -heaven forbid- a human's skull? "It didn't really decide to do anything, stop anthropomorphising, let's blame the poor operator with his trembling fist on the e-stop."

Yeah, to heck with that. If you're one of those people (and you know who you are): you're overcompensating. We're going to need a root cause analysis, pull all the circuit diagrams, diagnose the code, cross-check the interlocks, and fix the gorram actual problem. Policing language is not productive (and in the real-life situation in the factory, please imagine I'm swearing and kicking things -scrap metal, not humans!- for real too).

To be sure, in this particular case with the Openclaw bot, the human basically pointed experimental-level software at a human space and said "go". But I don't think they foresaw what happened next. They do have at least partial culpability here, but even that doesn't mean we get to just close our eyes, plug our ears, and refuse to analyze the safety implications of the system design in itself.

Shambaugh did a good job here. Even the Operator, however flawed, did a better job than just burning the evidence and running for the hills. Partial credit, amid the scorn, for the latter.

(Finally, note that there are probably 2.5 million of these systems out there now and counting, most -seemingly- operated by more responsible people. Let's hope.)


Replies

zephen yesterday at 8:35 PM

All excellent points.

Unfortunately, your most excellent point:

> Policing language is not productive

goes against the grain here. Policing language is the one thing our corporate overlords have gotten the right and the left to agree on. (Sure, they disagree on the details, but the First Amendment is in graver danger now than it has been for a long time.)

https://www.durbin.senate.gov/newsroom/press-releases/durbin...