> You cannot permit your employees to use LLMs in this manner and then tell them it's entirely their fault when it makes mistakes, because you gave them permission to use something that will make mistakes 100% without fail.
Yes you can. The same way Wikipedia (or, way back when, a paper encyclopedia) can be used for research but you have to verify everything with other sources because it is known there are errors and deficiencies in such sources. Or the same way you can use outsourced dev resources (meat-based outsourced devs can be as faulty as an LLM, some would argue sometimes more so) but must still review their code before implementing it in production.
Should they also ban them from talking to people as sources of information, because people can be misinformed or actively lie, rather than insisting that information found from such sources be sense-checked before use in an article?
Personally I barely touch LLMs at all (at some point this is going to wind up at DayJob, where they think the tech will make me more efficient…) but if someone is properly using them as a different form of search engine, or to pick out related keywords/phrases that are associated with what they are looking for but that they might not have thought of themselves, that would be valid IMO. Using them in these ways is very different from doing a direct copy+paste of the LLM output and calling it a day. There is a difference between using a tool to help with your task and using a tool to be lazy.
> it's company policy not to burn everything to the ground!
The flamethrower example is silly hyperbole IMO, and a bad example anyway, because everywhere that potentially dangerous equipment is actually made available for someone's job you will find policies exactly like this. Military use: “we gave them flamethrowers for X and specifically trained them not to deploy them near civilians; the relevant people have been court-martialled and duly punished for the burning down of that school”. Civilian use: “the use of flamethrowers to initiate controlled land-clearance burns must be properly signed off before work commences, and the work should only be signed off to be performed by those who have been through the full operation and safety training programs, and never without an environmental risk assessment”.
> Yes you can. The same way Wikipedia (or, way back when, a paper encyclopedia) can be used for research but you have to verify everything with other sources because it is known there are errors and deficiencies in such sources.
I think that if Wikipedia had no recommendations on good sources for their own articles and never banned sources, companies would not be so sanguine about letting people use Wikipedia. There's an entire internal process associated with evaluating sources, and the expectation when using Wikipedia is that nothing written in an article is going to be sourced from the Daily Mail or Conservapedia, as an example. I also think there are companies that have policies against talking to known liars. Given that Wikipedia bans sources and news agencies ban human sources once they've been shown to be unreliable, I don't think it's insane for such companies or agencies to say that AI shouldn't be used because it's been shown to be unreliable. Obviously there's a balancing act of utility versus accuracy, and Ars has (probably incorrectly) decided that the utility of AI outweighs its inaccuracies.
What is frustrating is that AI cannot be more accurate than the median reporter who is given a little more time. AI is trained on all digitizable text, including falsehoods and inaccuracies from laypeople. Humans can look up digitizable text using search engines, too. An AI can't follow up on leads or ask anyone questions. There's no world in which synthesizing available data from digitized sources alone ends up with more accurate data than a human with a search engine and the ability to make a phone call. So allowing LLM use at all is a direct admission that seeking out the "truth" is not an important goal, because LLM use could never actually improve accuracy and could only worsen it through hallucinated, plausible-sounding reporting. It's one thing when companies say that they're committed to truth while secretly their most important overriding concern is their bottom line; it's quite another when a company directly says that the bottom line is their most important concern. Imagine the emperor walking through the parade, nude, saying "So what if I am nude? What are you going to do about it?"
> The same way Wikipedia can be used for research
Before LLMs, Wikipedia was the greatest source of disinformation in human history. No journalist should ever have been using it for research. At best it's a fun project for satisfying people's idle curiosity, where the truth of what they read doesn't really matter, but if your job is to report factual information, reading Wikipedia is doing a disservice to yourself and your readers. Just as people don't properly verify the BS LLMs fabricate, very few people thoroughly read the citations on Wikipedia, which often involves purchasing books and getting access to research papers. If they did read the citations, they would realise that Wikipedia citations are all too frequently unsupported by the material they cite, or in some cases, the cited material establishes the exact opposite. This is to say nothing of cherry-picking sources, of course.
> Should they also ban them from talking to people as sources of information, because people can be misinformed or actively lie, rather than instead insisting that information found from such sources be sense-checked before use in an article?
Statements made by people are attributed to those people to account for this. Rather than saying "Company X's product is the safest product ever made", a journalist says "Company X's CEO claims their product is the safest ever made". People do not do this with LLMs or Wikipedia; rather than attributing the claim to an understood-to-be-unreliable source, they just present it as a factual statement. Also, if the journalist has good reason to believe a quoted statement is false, it is in fact journalistic malpractice to cite the quote wholesale without caveats informing the reader of the evidence that the quoted person is trying to mislead them.
> because everywhere where potentially dangerous equipment is actually made available for someone's job you will find policies exactly like this
Which maybe makes sense when the flamethrowers are a necessary part of the job. Flamethrowers are not necessary for journalism, full stop, so the fault rests with the organization introducing a dangerous tool into the work environment unnecessarily.