> Informal tone and mistakes actually signal that the message was written by a human
Except that this signal is now being abused. People add instructions to their prompts requesting a few typos and an informal style.
There was a guy complaining about AI-generated comments on Substack who had noticed a pattern in the spelling mistakes in the AI responses. It's common enough now.
But yes, typos do match the writer - you can still spot mistakes a human would make that an AI wouldn't generate. Humans are good at catching certain errors but not others, so the mistakes they miss are heavily biased. Keyboard typos are also different from touch autoincorrection, and AI-generated typos have their own flavour.
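A minimal sketch of why those two error classes leave different fingerprints (the adjacency map, confusion list, and helper names here are made up for illustration, not taken from any real detector): a keyboard slip produces a non-word built from neighbouring keys, while autocorrect swaps in a real but wrong word that still passes a spell check.

```python
import random

# Rough QWERTY adjacency map (partial, purely illustrative)
ADJACENT = {
    "a": "qwsz", "e": "wsdr", "i": "ujko", "o": "iklp",
    "n": "bhjm", "t": "rfgy", "s": "awedxz", "h": "gyujbn",
}

def keyboard_typo(word: str) -> str:
    """Simulate a physical-keyboard slip: one character hit on a neighbouring key."""
    idx = random.randrange(len(word))
    ch = word[idx]
    if ch in ADJACENT:
        ch = random.choice(ADJACENT[ch])
    return word[:idx] + ch + word[idx + 1:]

# Autocorrect-style errors replace the whole word with a valid but wrong word,
# so the output still passes a spell check -- a different fingerprint entirely.
AUTOCORRECT_CONFUSIONS = {"were": "we're", "its": "it's", "well": "we'll"}

def autocorrect_typo(word: str) -> str:
    return AUTOCORRECT_CONFUSIONS.get(word, word)

print(keyboard_typo("their"))    # e.g. "theur" -- a non-word, fails a spell check
print(autocorrect_typo("were"))  # "we're"      -- a real word, passes a spell check
```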
Yeah, I'd argue a large portion of what LLMs are being used for can be characterized as "counterfeiting" traditionally useful signals. Signals that told us there was another human on the other side of the conversation, that they were attentive, invested, smart, empathetic, etc.
Counterfeiting was possible before, but it had a higher bar because you had to hire a ghostwriter.