> being the type of slurry that pre-AI was easily avoided by staying off of LinkedIn
This is why I'm rarely fully confident when judging whether something was written by AI. The "It's not this. It's that" pattern is not an emergent property of LLM writing; it's straight from the training data.
Its prevalence in contexts that aren't "LinkedIn here's what I learnt about B2B sales"-peddling is the emergent property of LLM writing. Like, 99% of articles wouldn't have a single usage of it pre-LLMs. This article has something like six of them.
And even if you remove all of them, it's still clearly AI.
People have hated the LinkedIn-guru style for years, long before AI slop became mainstream, which is why the only people who used it were... those LinkedIn gurus. Yet now it's suddenly everywhere. No one wrote articles on topics like malware in this style.
What's so revolting about it is that it just sounds like main character syndrome turned up to 11.
> This wasn’t an isolated case. It was a campaign.
This isn't a bloody James Bond movie.
I don't agree. I have two theories about these overused patterns, because they're way overrepresented:
One, they're rhetorical devices popular in oral speech, and are being picked up from transcripts and commercial sources, e.g. television ads or political talking-head shows.
Two, they're popular with reviewers while models are going through post-training, either because they help paper over logical gaps or because they provide a stylistic gloss that feels professional in small doses.
There is no way these patterns are in normal written English in the training corpus in the same proportion as they're being output.