Even if it didn't fabricate quotes wholesale, taking an LLM's output and passing it off as your own writing is textbook plagiarism; that alone is malicious intent. Then, if you know that LLMs are next-token prediction engines with no concept of "truth", built solely to generate probabilistically likely text with no mechanism anchoring them to reality or facts, and you publish that output in a journal that (ostensibly) exists to present factual information to readers, you are engaging in a second layer of malicious intent. It would take an astounding level of incompetence for a tech journalist not to know that LLMs do not reliably generate factual output, and that's hard to believe given that one of the authors has worked at Ars for 14 years. If they really are that incompetent, they should probably be fired on that basis anyway. But even then, incompetence would only account for one of the two layers of malicious intent.
This is silly. LLMs are not people; you can’t “plagiarize” an LLM. Either the result is good or it isn’t, but it’s the actual author’s responsibility either way.
The article in question appears to me to be written by a human (excluding what's in quotation marks), but of course neither of us has a crystal ball. Are there particular parts of it that you would flag as generated?
Honestly, I'm just not astounded by that level of incompetence. I'm not saying I'm impressed or that it's okay. But I've heard much worse stories of journalistic malpractice. It's a topical, disposable article. Again, that doesn't justify anything, but it doesn't surprise me that a short summary of a series of forum exchanges and blog posts was low effort.