There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention.
I think that is the crucial question. We often lump malice together with "reckless disregard": the intention to cause harm is very close to the intention to do something you know, or should know, is likely to cause harm. We treat the two the same because there is no real way to prove intent; otherwise everyone could simply say they "meant no harm" and just didn't realize how harmful their actions could be.
I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase using some kind of weak glue.
Replace the parent poster's "malice" with "malfeasance", and it works well enough.
I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.
Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.
The issues with such tools are well documented, though. If you're going to use a tool with known flaws, you'd better do your best to cover for them.
The tool, when working as intended, makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.
> Using a flawed tool doesn’t count as intention.
"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."
They aren't allowed to use the tool, so there was clearly intention.
They're expected by policy not to use AI. Lying about using AI is also malice.
Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.