This is one of the few articles where I noticed a bunch of LLM-isms and still read to the end because it was interesting.
It's because there's clearly a near-1:1 ratio of input to output. I also noticed some LLM-isms, and I suspect the author may have run the text (perhaps in the form of a large number of bullet points) through an LLM. But because he's using the LLM to clean instead of multiply, it's still worth reading.
I didn't see any LLM-isms. Em-dashes, I guess, but I expect those in actual articles; they're only fishy in social media comments.
LLM-isms are tolerably bad. LLMs' narrative ability is intolerably terrible. As others have said, because a human actually wrote the overall narration for this, it was still compelling to read. The mistake would be skipping a well-narrated and thoughtful article just because of a few LLM-isms.
I think LLMs' lack of "theory of mind" leads to them severely underperforming on narration and humor.
It doesn't read like an LLM to me. What are you seeing?
I bailed; it just really kills my desire to keep reading.
Hi! I work at IEEE Spectrum, and there's no way an LLM wrote this. We have a pretty strict generative-AI use policy (bottom of this page: https://spectrum.ieee.org/about). I'm guessing this is from writers using actual writing techniques that gen AI stole from...