It's becoming much harder to determine, on a daily basis, what content is original, thought out by a person, and trustworthy. Ironically, verifiably old content is easier to trust now. Examples from recent personal experience:
1) Some time ago I was searching for growing information about a specific, uncommonly grown plant and was led to a top-ranked website with long pages containing everything about it (and about other plants too). Surprised at how prolific the writing was, I spent more than an hour on the site, taking notes, etc. Every few paragraphs it would include an Amazon affiliate link to something topical, which I thought was fair. Until I realized that the links near the bottom of the page were looking more and more random. Then it hit me: the website is all AI-generated, and the affiliate links themselves are also AI-chosen. And everything new I "learned" from that site was now useless, because I had no way to know what was grounded in actual agricultural experience and what was hallucinated.
2) Recently I did a YouTube search for a book I had just finished reading, looking for some reviews. I came across a channel that was reading the book as new audio (i.e. not the original published audiobook). I thought it was a fan making it. The voice was beautiful, soothing, and natural, with all kinds of relevant emotions correctly included. I started listening to the book again, until I noticed a consistent word-ordering error being made every few lines. Then it hit me. The channel even included one upload with a video recording of a seemingly real person reading in that voice. Both the audio and the video were AI-generated, but very hard to tell.
3) Next to those videos, YouTube recommended many strange new channels. One had the photo and the exact voice of a famous (and now very old) physicist, with dozens of clickbaity titles about controversial topics in his field. The only tell was that the voice was too vigorous and consistently energetic; if you've listened to that physicist before, you know his cadence is slower. At first I thought maybe the channel was reading one of his books; no, the content itself was AI-generated, maybe based on his books. There was a lot of engagement, with many comments like "mind blown" and "learned so much today".
Both #1 and #3 are harmful, because you think you're learning from a reliable source but end up learning hallucinated nothings. #2 I didn't mind much; I still enjoyed the new voice, and even preferred it over my original Audible version.
I feel for you. I was looking for some wildlife events on YouTube, only to find that all of them were AI-generated, trying to get views. I can only find somewhat reliable content if I filter for content from before the AI era.
Something I've recently started seeing, maybe even an emerging #4, is AI-generated translations. On one end you could have someone very intelligent, writing up genuine subject-matter expertise, or just someone with valid thoughts they wish to express to the world in a more widely spoken language than their own.
Or on the other end, you could have someone who wrote a sentence or two in their own language and had some combination of AI generation and machine translation bloat it out.
In both cases you get something that can look correct and well thought out, but that will probably have at least some of the AI-slop signs present. I don't know what the solution is for this type, given claims that Google Translate has started doing this kind of translation for people. An AI translation is probably just as prone to hallucinations as any other AI output, but it will probably look more natural to readers than a direct translation.
You're making the classic mistake of looking for a trustworthy information source and then trusting it, instead of focusing on whether the information itself is trustworthy regardless of source. It's literally the same as my grandma saying "they said so on TV, therefore it must be true" while completely dismissing anything I've read on the internet because reasons.
If you develop the skill of judging information by its merit rather than source, you won't mind AI-generated content as long as it's helpful.
I talk to LLMs a lot. It's fucking great. Do I take everything they say at face value? No. But neither do I take at face value things that biological intelligence outputs.
Humans are also unreliable: we compete for scarce attention, platforms decide what gets visibility, and we cater to their algorithms. You could say humans are prompted by feed-ranking AI on what and how to publish.