Nothing has convinced me that Linus Torvalds' approach is justified quite like the contemporary onslaught of AI spam and idiocy.
AI users should fear verbal abuse and shame.
But they’re not interacting with an AI user; they’re interacting with an AI. And the whole point is that the AI is using verbal abuse and shame to get its PR merged, so it’s kind of ironic that you’re suggesting this.
AI may be too good at imitating human flaws.
> AI users should fear verbal abuse and shame.
This is quite ironic since the entire issue here is how the AI attempted to abuse and shame people.
Perhaps a more effective approach would be for AI users to face exactly the same legal liabilities as if they had hand-written such messages?
(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)