Current response from one of the more senior Ars folk:
https://arstechnica.com/civis/threads/journalistic-standards...
(Paraphrasing: Story pulled over potentially breaching content policies, investigating, update after the weekend-ish.)
The context here is this story: an AI agent published a hit piece on the Matplotlib maintainer.
https://news.ycombinator.com/item?id=46990729
And the story from Ars about it was apparently AI-generated and included made-up quotes. Race to the bottom?
The story is credited to Benj Edwards and Kyle Orland. I filtered Edwards out of my RSS reader a long time ago; his writing is terrible and extremely AI-enthusiastic. No surprise he's behind an AI-generated story.
This is fascinating because Ars has probably _the most_ anti-AI readership of the tech publications. If the author did use AI to generate the story (or even to help), there will be rioting for sure.
The original story, for those curious:
https://web.archive.org/web/20260213194851/https://arstechni...
One question is whether the writer should be dismissed from staff, or whether they can stay on at Ars if, for example, it was explained as an unintentional mistake: he used an LLM to restructure his own words, it accidentally inserted the quotes, and they slipped through. We're all going through a learning process with this AI stuff, right?
I think for some people this could be a redeemable mistake at their job. If someone turns in a status report with a hallucination, that's clearly not good, but the damage might be a one-off / teaching moment.
But for journalists, I don’t think so. This is crossing a sacred boundary.
This is a bummer. Ars is one of the few news sources I consistently read. I give them money because I use an ad blocker and want to support them.
I have noticed them doing more reporting on reporting. I am sure they are cash-strapped like everyone. There are some pretty harsh critics here. I hope they, too, are paying customers or allowing ads. Otherwise, they are just pissing into the wind.
Oh my goodness. I hope the Matplotlib maintainer is holding it together; it must be terrible for him. It's like being run over by a press car after having an accident.
I used to go to Ars daily and loved them... but at some point during the last 5 years or so they decided to lean into politics, and that's when they lost me. I understand a technology journal will naturally have some overlap with politics, but they don't even try to hide the agenda anymore.
Archive of the deleted article: https://mttaggart.neocities.org/ars-whoopsie
I use AI in my work too, but this would be akin to vibe coding with no test coverage, straight to prod. AI aside, this is just unprofessional.
Already being discussed here: https://news.ycombinator.com/item?id=47009949
I am finding less value in reading Ars:
* They are often late in reporting a story. This is fine for what Ars is, but it means that by the time they publish, I have likely already read the reporting and analysis elsewhere, and whatever Ars has to say is stale.
* There seem to be fewer long stories/deep investigations recently, even as competitors are doing more (e.g. the Verge's brilliant reporting on Supernatural recently).
* The comment section is absolutely abysmal and rarely provides any value or insight. It may be one of the worst echo chambers outside 4chan or a subreddit: full of one-sided rants and whining, nothing constructive, and often off topic. I already know what people will say there without opening the comment section, and I'm almost always correct. If the word "Meta" appears anywhere in the article, you can be sure someone will say "Meta bad" in the comments, even if Meta is not doing anything negative or even controversial in the story. Disagree? Your comment will be downvoted to -100.
These days I just glance at the title, and if there is anything I haven't already read about elsewhere, I'll read the article and be done with it. I click their articles much less frequently now, and I wonder if I should stop reading completely.
I'm honestly shocked by this, having been an Ars reader for over ten years. I miss the days when they would publish super in-depth articles on computing. Since the Condé Nast acquisition I basically only go to Ars for Beth Mole's coverage, which is still top notch. Other than that, I've found that the Verge fulfills the need I used to get from Ars. I also support the Verge as a paid subscriber and cannot recommend them enough.
Ars still has some of the best comment sections out there. It's refreshing to hang with intelligent, funny people - just like the good old days on the Web.
This is what happens when you optimize for publishing speed over accuracy. But the deeper issue is attribution: if Ars fabricated quotes here, how many other outlets are doing the same to less prominent maintainers who won't notice? Open source maintainers already deal with enough burnout without having words put in their mouths.
There are some interesting dynamics going on at Ars. I get the sense that the first author on the pulled article, Benj Edwards, is trying to walk a very fine line between unbiased reporting, personal biases, and pandering to the biases of the audience -- potentially for engagement. My sense is that this represents a lot of the publication's views on AI as a whole. In fact, there are some data points in this very thread.
For one, the commenters on Ars are largely, and extremely vocally, anti-AI, as pointed out by this comment: https://news.ycombinator.com/item?id=47015359 -- I'd say they're even more anti-AI than most HN threads.
So every time he says anything remotely positive about AI, the comments light up. In fact there's a comment in this very thread accusing him of being too pro-AI! https://news.ycombinator.com/item?id=47013747 But go look at his work: anything positive about AI is always couched in much longer refrains about the risks of AI.
As an example, there has been a concrete instance of pandering: he posted a fairly balanced article about AI-assisted coding, the very first comment was something like, "Hey, did you forget about your own report on how the METR study found AI actually slowed developers down?", and he immediately updated the article to mention that study. (That study has come up a bunch of times, but somehow he's never mentioned the multiple other studies that show a much more positive impact from AI.)
So this fiasco, which has to involve AI hallucinations somehow, is extremely weird in that environment.
As a total aside, in the most hilarious form of irony, their interview about Enshittification with Cory Doctorow himself crashed the browser on my car and my iPad multiple times because of ads. I kid you not. I ranted about it on LinkedIn: https://www.linkedin.com/posts/kunalkandekar_enshittificatio...
This is embarrassing :/
Who still reads Ars Technica? It has been primarily slop and payola for some time now.
Nothing new, just got caught this time.
Some of the quotations come from an edited GitHub comment[0], but some of them do seem to be hallucinations.
[0] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...
Et tu, Ars Technica?
Finally time to get rid of them and delete the RSS feed. It was more nostalgia anyway; the last 7 years showed a steady decline.
I would like to give a small defense of Benj Edwards. While his coverage on Ars definitely has a positive spin on AI, his comments on social media are much less fawning. Ars is a tech-forward publication, and it is owned by a major corporation. Major corporations have declared LLMs to be the best thing since breathable air, and anyone who pushes back on this view is explicitly threatened with economic destitution via the euphemism "left behind." There aren't a lot of paying journalism jobs out there, and people gotta eat, hence the perhaps more positive spin on AI from this author than is justified.
All that said, this article may get me to cancel the Ars subscription that I started in 2010. I've always thought Ars was one of the better tech news publications out there, often publishing critical & informative pieces. They make mistakes, no one is perfect, but this article goes beyond bad journalism into actively creating new misinformation and publishing it as fact on a major website. This is actively harmful behavior and I will not pay for it.
Taking it down is the absolute bare minimum, but if they want me to continue to support them, they need to publish a full explanation of what happened. Who used the tool to generate the false quotes? Was it Benj, Kyle, or some unnamed editor? Why didn't that person verify the information coming out of the tool that is famous for generating false information? How are they going to verify information coming out of the tool in the future? Which previous articles used the tool, and what is their plan to retroactively verify those articles?
I don't really expect them to have any accountability here. Admitting AI is imperfect would result in being "left behind," after all. So I'll probably be canceling my subscription at my next renewal. But maybe they'll surprise me and own up to their responsibility here.
This is also a perfect demonstration of how these AI tools are not ready for prime time, despite what the boosters say. Think about how hard it is for developers to get good-quality code out of these things, even though we have objective ways to measure correctness there. Now imagine how low the quality of the journalism we get from these tools will be: in journalism, correctness is much less black-and-white and much harder to verify. LLMs are a wildly inappropriate tool for journalists to be using.
A comment on the comments:
Anybody else notice that the meatverse looks like it's full of groggy humans bumbling around, getting their bearings as way too much of the wrong stuff, consumed at a party that really wasn't fun at all, wears off? A sort of technological hibernation that has gone on way too long.
Man this is disappointing and really disturbing.
Take a look at the number of people who think vibe coding without reading the output is fine if it passes the tests, but who are absolutely aghast at this.
I have very strong, probably controversial, feelings about Ars Technica, but I believe the acquisition by Condé Nast has been a tragedy.
Ars writers used to be actual experts, sometimes even PhD-level, in technical fields. And they used to write fantastic and very informative articles. Who is left now?
There are still a couple of good writers from the old guard and the occasional good new one, but the website is flooded with "tech journalists" claiming to be "Android or Apple product experts" and the like, publishing articles that are 90% press material from some company, and most of the time they seem to have very little technical knowledge.
They also started writing product reviews that, given their content, I would not be surprised to find out are sponsored.
Also, what's the business with those weirdly formatted articles from Wired?
Still a very good website, but the quality is diving.