Heavy Gemini user here, another observation: Gemini cites lots of "AI generated" videos as its primary source, which creates a closed loop and has the potential to debase shared reality.
A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end of the response was an AI generated one. It might have had actual facts, but overall, my trust in Gemini's response to my query went DOWN after I noticed the AI generated video attached as the source.
Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
YouTube channels with AI generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "dead internet theory," et al.
All of that and you're still a heavy user? Why would Google change how Gemini works if you keep using it despite those issues?
> Gemini cites lots of "AI generated" videos as its primary source
Almost every time for me... an AI generated video, with AI voiceover, AI generated images, always with < 300 views
>Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.
This itself seems pretty damning of these AI systems from a narrative point of view, if we take it at face value.
If you can't trust AI to generate output sufficiently grounded in fact to even use as a reference point, why should end users believe, by extension, the narrative that these systems are as capable as they're being told?
Try Kagi’s Research agent if you get a chance. It seems to have been given the instruction to tunnel through to primary sources, something you can see it do on reasoning iterations, often in ways that force a modification of its working hypothesis.
If you are still looking for material, I'd like to recommend Perun and the latest video he made on that topic: https://youtu.be/w9HTJ5gncaY
Since he is a heavy "citer," you could also check the video description for more sources.
Google will mouth words, but their bottom line runs the show. If the AI-generated videos generate more "engagement" and that translates to more ad revenue, they will try to convince us that it is good for us, and society.
Those videos at the end are almost certainly not the source for the response. They are just a "search for related content on YouTube" appendage to fish for views.
> A few days ago, I asked it some questions on Russia's industrial base and military hardware manufacturing capability
This is one of the last things I would expect to get any reasonable response about from pretty much anyone in 2026, especially LLMs. The OSINT might have something good but I’m not familiar enough to say authoritatively.
> and has the potential to debase shared reality.
If only.
What it actually has is the potential to debase the value of "AI." People will just eventually figure out that these tools are garbage and stop relying on them.
I consider that a positive outcome.
I think we hit peak AI improvement velocity sometime mid last year. The reality is all progress was made using a huge backlog of public data. There will never be 20+ years of authentic data dumped on the web again.
I've hoped otherwise, but suspected that as time goes on LLMs will become increasingly poisoned by the well of the closed loop. I don't think most companies can resist the allure of more free data, as bitter as it may taste.
Gemini has been co-opted as a way to boost YouTube views. It refuses to stop showing you videos no matter what you do.
Ouroboros - The mythical snake that eats its own tail (and ingests its own excrement)
Users can turn off grounded search in the Gemini API. I wonder if the Gemini app is over-indexing on relevancy, leading to poor sources.
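For reference, a minimal sketch of what that looks like with the google-genai Python SDK (the model name and prompt here are just placeholders): grounding with Google Search is opt-in per request, so leaving the tool out is what disables it.

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    # Grounded: explicitly attach the Google Search tool, so the model
    # can pull in (and cite) web/video results.
    grounded = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder model name
        contents="Summarize Russia's military hardware manufacturing capacity.",
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    )

    # Ungrounded: simply omit the tool; the model answers from its
    # training data alone, with no fetched sources to cite.
    ungrounded = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="Summarize Russia's military hardware manufacturing capacity.",
    )

The consumer Gemini app doesn't expose that switch, which is presumably why it keeps surfacing low-view videos as "sources."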
Google is in a much better spot to filter out all AI generated content than others.
It's not like ChatGPT isn't going to cite AI videos/articles either.
I came across a YouTube video that was recommended to me this weekend, talking about how Canada is responding to these new tariffs in January 2026, talking about what Prime Minister Justin Trudeau was doing, etc. etc.
Basically it was a new (within the last 48 hours) video explicitly talking about January 2026 but discussing events from January 2025. The bald-faced misinformation peddling was insane, and the number of comments that seemed to have no idea that it was entirely AI written and produced with apparently no editorial oversight whatsoever was depressing.
Unfortunately I think a lot of AI models put more weight on videos, as they were harder to fake than a random article on the internet. Of course that is no longer the case, with all the AI slop videos being churned out.
There was a recent HN post about how ChatGPT mentions Grokipedia so many times.
Looks like all of these are going through this enshittification-of-search era where we can't trust LLMs at all, because it's literally garbage in, garbage out.
Someone mentioned Kagi Assistant in here. Although they call third-party model APIs themselves, I feel like they might be able to put their own search in between. So if anyone from the Kagi team (or similar) is around: can they tell us whether Kagi Assistant uses Kagi Search itself (iirc it mostly does), and whether it suffers from these issues (or the Grokipedia issue) or not?
So how does one avoid the mistake again? When this happens, it's worse than finding out a source is less reliable than expected:
I was living in an alternate, false reality, in a sense, believing the source for X time. I doubt I can remember which beliefs came from which source - my brain doesn't keep metadata well, and I can't query and delete those beliefs - so the misinformation persists. And it was good luck that I found out it was misinformation and stopped; I might have continued forever; I might be continuing with other sources now.
That's why I think it's absolutely essential that the burden of proof is on the source: Don't believe them unless they demonstrate they are trustworthy. They are guilty until proven innocent. That's how science and the law work, for example. That's the only inoculation against misinformation, imho.
> YouTube channels with AI generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "dead internet theory," et al.
Yeah. This has really become a problem.
Not for all videos; music videos are kind of fine. I don't listen to music generated by AI but good music should be good music.
The rest has unfortunately really gotten worse. Google is ruining YouTube here. Many videos now mix real footage with AI-generated footage, e.g. animal videos. With some this is obvious; others are hard to expose as AI. I changed my own policy: I consider anyone using AI without declaring it properly a cheater I never want to interact with again (on YouTube). Now I need to find a no-AI-videos extension.