> an LLM-written article by default has zero value, since every single line could be true or it could be a convincingly crafted lie; every line has to be fact-checked
The exact same thing is true of human speech. You have no idea whether anything a human says is true until you fact-check it. But you don't fact-check everything every person says, do you?
So what do you do instead? You use heuristics: simple, and quite flawed, subconscious rules that let you stop worrying. You find a person you like, classify them as "trustworthy", and believe almost everything they say without considering whether any of it might be false. But of course, humans are fallible. Many of them take in "poisoned" input, and some even hallucinate (make up information). They then spread that false information around. Yes, even the people you trust.
And when you're faced with something untrue said by someone you trust, you rationalize it: "Oh, they just made a mistake." Then you move on, ignoring that a person you trust told you a falsehood. Life is hard enough without questioning whether everything we hear is true. So we just accept falsehoods from some people, and not from others.
LLMs are likely more factual and knowledgeable today than humans are, thanks to constant improvement through reinforcement learning. They're going to keep getting better, too. But they'll never be perfect. Rather than rejecting everything they produce, my suggestion is to do what you already do with humans: trust them a little, verify the big things, let the little things go, accept that there will be errors, and move on with life.