Hacker News

Editor's Note: Retraction of article containing fabricated quotations

82 points | by bikenaga | today at 6:29 PM | 76 comments | view on HN

Comments

water-data-dude | today at 9:20 PM

Benj Edwards, one of the authors, accepted responsibility in a Bluesky post[0]. He lists some extenuating circumstances[1], but takes full responsibility. Time will tell whether it's a one-off thing or not, I guess.

[0] https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p

[1] your mileage may vary on how much you believe it and how much slack you want to cut him if you do

show 1 reply
j0057 | today at 7:18 PM

Odd that there's no link to the retracted article.

Thread on Arstechnica forum: https://arstechnica.com/civis/threads/editor%E2%80%99s-note-...

The retracted article: https://web.archive.org/web/20260213194851/https://arstechni...

show 2 replies
andrewflnr | today at 7:21 PM

People put a lot of weight on blame-free post-mortems and not punishing people who make "mistakes", but I believe that has to stop at the level of malice. Falsifying quotes is malice. Fire the malicious party or everything else you say is worthless.

show 6 replies
delichon | today at 8:39 PM

Imagine a future news environment where oodles of different models are applied to fact check most stories from most major sources. The markup from each one is aggregated and viewable.

A lot of the results would be predictable partisan takes that add no value. But in a case like this, where the whole conversation is public, the inclusion of fabricated quotes would become evident. Certain classes of errors would become easy to spot.

Ars Technica blames overreliance on AI tools, and that is obviously true. But there is potential for this epistemic regression to be an early stage of spiral development, before we learn to leverage AI tools routinely to inspect every published assertion, and then use those results to surface false and controversial ones for human attention.

show 3 replies
mzajc | today at 7:33 PM

What are they changing to prevent this from happening in the future? Why was the use of LLMs not disclosed in the original article? Do they host any other articles covertly generated by LLMs?

As far as I can tell, the pulled article had no obvious tells and was caught only because the quotes were entirely made up. Surely it's not the only one, though?

show 1 reply
mrandish | today at 7:49 PM

When an article is retracted, it's standard to at least mention the title and what specific information was incorrect, so that anyone who may have read, cited, or linked it knows what was inaccurate. That's the whole point of a retraction; without it, this non-standard retraction has no utility except as a fig leaf to keep external reporting from becoming a bigger story.

In the comments I found a link to the retracted article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje.... Now that I know which article, I know it's one I read. I remember the basic facts of what was reported but I don't recall the specifics of any quotes. Usually quotes in a news article support or contextualize the related facts being reported. This non-standard retraction leaves me uncertain if all the facts reported were accurate.

It's also common to provide at least a brief description of how the error happened and the steps the publication will take to prevent future occurrences. I assume any info on how it happened is missing because none of it looks good for Ars, but why no details on policy changes?

Edit to add more info: I hadn't yet read the now-retracted original article on archive.org. Now that I have, I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article". Scott, the person originally misquoted, also suspects something stranger is going on.

> "This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed." https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
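(For context on the scraper-blocking Scott mentions: such blocks are commonly expressed as robots.txt directives naming known AI crawler user-agents. A minimal illustrative sketch, not theshamblog.com's actual configuration:)

```text
# robots.txt — ask AI crawlers to stay away (honored by
# well-behaved bots like OpenAI's GPTBot; not enforced)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

Nothing technically prevents a non-compliant agent from fetching the pages anyway, which is why an agent that respects the block may fall back to generating plausible-sounding text instead.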

My theory is a bit different from Scott's: Ars appears to use an automated tool that adds text links to articles to drive traffic to related articles already on Ars. If that tool is now LLM-based, generating links from concepts instead of just keywords, perhaps it mistakenly has unconstrained access to change other article text! If so, it's possible the author and even the editors are not at fault. The blame could lie with the Ars publishers using LLMs to automate monetization processes downstream of editorial, which might explain the non-standard, vague retraction. If so, that would make for an even more newsworthy article, one directly within Ars' editorial focus.

show 3 replies
sevg | today at 8:29 PM

Feels like the nail in the coffin; Ars has already been going downhill for half a decade or more.

I unsubscribed (just the free RSS) regardless of their retraction.

QuadrupleA | today at 8:46 PM

Glib observation, but this sounds quite generic and AI-written.

jmward01 | today at 8:12 PM

I see a lot of negative comments on this retraction about how they could have done it better. Things can always be done better but I think the important thing is that they did it at all. Too many 'news' outlets today just ignore their egregious errors, misrepresentations and outright lies and get away with it. I find it refreshing to see not just a correction, but a full retraction of this article. We need to encourage actual journalistic integrity when we see it, even if it is imperfect. This retraction gives me more faith in future articles from them since I know there is at least some editorial review, even if it isn't perfect.

anonymous908213 | today at 7:00 PM

Zero repercussions for the senior editor involved in fabricating quotations (they neglect even to name the culprit), so this is essentially an open confession that Ars has zero (really, negative) journalistic integrity and will continue to blatantly fabricate articles rather than even pretend to do journalism, so long as they don't get caught. To get to the stage where an editor who has been at the company for 14 years is allowed to publish fraudulent LLM output, which is both plagiarism (claiming the output as his own) and the spread of disinformation (fabricating stories wholesale), indicates a deep cultural rot within the organisation that warrants a response deeper than "oopsie". The publication of that article was not an accident.

show 1 reply
unethical_ban | today at 7:26 PM

Who got fired?

show 2 replies
add-sub-mul-div | today at 7:07 PM

> We have covered the risks of overreliance on AI tools for years

If the coverage of those risks brought us here, of what use was the coverage?

Another day, another instance of this. Everyone who warned that AI would be used lazily without the necessary fact-checking of the output is being proven right.

Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today when you point out a typo.

show 1 reply
knowitnone3 | today at 8:55 PM

[dead]

usefulposter | today at 6:31 PM

tl;dr: We apologize for getting caught. Ars Subscriptors in the comments thank Ars for their diligence in handling an editorial fuckup that wasn't identified by Ars.

show 2 replies