Hacker News

Google AI Overviews cite YouTube more than any medical site for health queries

392 points by bookofjoe yesterday at 2:27 PM | 204 comments

Comments

abixb yesterday at 3:27 PM

Heavy Gemini user here, another observation: Gemini cites lots of "AI generated" videos as its primary source, which creates a closed loop and has the potential to debase shared reality.

A few days ago, I asked it some questions about Russia's industrial base and military hardware manufacturing capability, and it wrote a very convincing response, except the video embedded at the end was an AI-generated one. It might have contained actual facts, but overall my trust in Gemini's response to my query went DOWN after I noticed the AI-generated video attached as the source.

Countering debasement of shared reality and NOT using AI generated videos as sources should be a HUGE priority for Google.

YouTube channels with AI-generated videos have exploded in sheer quantity, and I think the majority of new channels and videos uploaded to YouTube might actually be AI; "Dead internet theory," et al.

show 18 replies
gumboshoes yesterday at 3:41 PM

I have permanent prompts in Gemini settings to tell it to never include videos in its answers. Never ever for any reason. Yet of course it always does. Even if I trusted any of the video authors or material - and I don't know them so how can I trust them? - I still don't watch a video that could be text I could read in one-tenth of the time. Text is superior to video 99% of the time in my experience.

show 4 replies
jonas21 yesterday at 4:39 PM

If you click through to the study that the Guardian based this article on [1], it looks like it was done by an SEO firm, written by a Content Marketing Manager. Kind of ironic, given that it's about the quality of cited sources.

[1] https://seranking.com/blog/health-ai-overviews-youtube-vs-me...

danpalmer yesterday at 10:45 PM

> YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said.

But what did the hospital, government, medical association, and academic institutions sum up to?

The article goes on to give the 2nd to 5th positions in the list. 2nd place isn't that far behind YouTube, and positions 2-5 add up to nearly twice YouTube's number (8.26% vs 4.43%). This also ignores the differing accessibility of video versus articles, and the fact that YouTube has health fact-checking for many topics.

I love The Guardian, but this is bad reporting about a bad study. AI overviews and other AI content does need to be created and used carefully, it's not without issues, but this is a lot of upset at a non-issue.
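For anyone checking the arithmetic, here is a quick sketch using only the two figures quoted above (the excerpt doesn't give the individual 2nd-5th shares, so only the combined total is used):

```python
# Figures from the article: YouTube's share of all AI Overview citations,
# and the combined share of the 2nd-5th ranked (medical) domains.
youtube_share = 4.43
next_four_combined = 8.26

# The next four sources together take nearly twice YouTube's share.
ratio = next_four_combined / youtube_share
print(f"combined medical sources are {ratio:.2f}x YouTube's share")
```

So "nearly twice" checks out (about 1.86x), even though YouTube tops the list as a single domain.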

xnx yesterday at 3:12 PM

Sounds very misleading. Web pages come from many sources, but most video is hosted on YouTube. Those YouTube videos may still be from the Mayo Clinic. It's like saying most medical information comes from Apache, Nginx, or IIS.

show 5 replies
jacquesm yesterday at 10:51 PM

Of course they do: YouTube makes Google more money. Video is a crap medium for most of the results to my queries, and yet it is usually by far the biggest chunk of the results. Then you get the (very often comically wrong) AI results, and then finally some web page links. The really odd thing is that Google has a 'video' search facility; if I wanted a video as the result, I would use that instead, or I would use the 'video' keyword.

coulix yesterday at 9:21 PM

The YouTube citation thing feels like a quality regression. For medical stuff especially, I've found tools that anchor on papers (not videos) to be far more usable; incitefulmed.com is one example I've tried recently.

jppope yesterday at 7:51 PM

Naïve question here... personally, I've never found WebMD, the CDC, or the Mayo Clinic that good at answering actual medical questions. Why is it a problem to cite YouTube videos with a lot of views? Wouldn't that be better?

show 4 replies
seanalltogether yesterday at 4:51 PM

I've also noticed lately that it parrots a lot of content straight from Reddit; usually the answer it gives sits directly above the Reddit link leading to the same discussion.

jdlyga yesterday at 3:13 PM

It's tough convincing people that Google AI Overviews are often very wrong. People think that if it's displayed so prominently on Google, it must be factually accurate, right?

"AI responses may include mistakes. Learn more"

It's not just mistakes; half the time it's completely wrong, total bullshit information. Even compared to other AI: if you put the same question into GPT 5.2 or Gemini, you get much more accurate answers.

show 4 replies
htx80nerd yesterday at 5:49 PM

I ask Gemini health questions non-stop and never see it using YouTube as a source. Quickly looking over some recent chats:

- chat 1: 2 sources are NIH; the other isn't YouTube.

- chat 2: PNAS, PubMed, Cochrane, Frontiers, and PubMed again several more times.

- chat 3: 4 random websites I've never heard of, no YouTube.

- chat 4: a few random websites and NIH, no YouTube.

nicce yesterday at 5:37 PM

I would guess they are doing this on purpose, because they control YouTube's servers and can cache content that way, for less latency. And once people figure it out, it pushes more information into Google's control, since the AI prefers it and people want their content used as a reference.

drsalt today at 2:22 AM

The real promise of large language models was that, before people were looking too closely, datasets had been procured in questionable ways, so users could have access to medical knowledge distilled from doctors' emails and the Word documents on their PCs, which would have a lot of value. Now it has become a glorified search engine.

jesse__ yesterday at 7:12 PM

With the general lack of scientific rigour, accountability, and totally borked incentive structure in academia, I'm really not sure if I'd trust whitepapers any more than I'd trust YouTube videos at this point.

jgalt212 today at 12:49 PM

For straight up search Google is better, but for AI search I prefer Bing.

lambdaone today at 10:45 AM

Randall Munroe coined the helpful word 'citogenesis' to describe the combination of uncritical citation and feedback loops. Once the nonsense is out there, it's hard to ever claw it back.

https://xkcd.com/978/

not_good_coder yesterday at 3:20 PM

What counts as an authoritative source of medical information is debatable in general. Chatting with the initial results to ask for a breakdown of sources, with classified recommendations, is a logical second step for context.

ggnore7452 yesterday at 8:41 PM

IMO, for health-related stuff, or most general knowledge that doesn't require post-2023 info, the internal knowledge of the LLM is much better than the web-search-augmented answer.

dbacar yesterday at 5:53 PM

What about the answers (regardless of the source)? Are they right or not?

winddude yesterday at 6:51 PM

Google search has been on a downward slope for a while, all because they prioritized maximizing profit over UX and quality.

mikkupikku yesterday at 3:25 PM

Don't all real/respectable medical websites basically just say "Go talk to a real doctor, dummy."?

...and then there's WebMD, "oh you've had a cough since yesterday? It's probably terminal lung cancer."

show 1 reply
laborcontract yesterday at 3:24 PM

Google AI Overviews are often bad, yes, but why is YouTube as a source necessarily a bad thing? Are these researchers doctors? A close relative is a practicing surgeon and a professor in his field. He watches YouTube videos of surgeries practically every day. Doctors from every field well understand that YT is a great way to share their work and discuss it with others.

Before we get too worked up about the results, just look at the source. It's a SERP ranking aggregator (not linking to them to give them free marketing) that's analyzing only the domains, not the credibility of the content itself.

This report is a nothingburger.

show 2 replies
ajross yesterday at 6:54 PM

Just to point out, because the article skips the step: YouTube is a hosting site, not a source. Saying that something "cites YouTube" sounds bad, but it depends on what the link is. To be blunt: if Gemini is answering a question about cancer with a link to a Mayo Clinic video, that's a good thing, a good cite, and what we want it to do.

josefritzishere yesterday at 4:51 PM

Google AI cannot be trusted for medical advice. It has killed before and it will kill again.

show 1 reply
Pxtl yesterday at 4:20 PM

What's surprising is how poor Google Search's access to YouTube video transcripts is. Like, I'll Google search for statements that I know I heard on YouTube, but they just don't appear as results, even though the video has automated transcription on it.

I'd assumed they simply didn't feed the transcripts properly to Google Search... but they did for Gemini? Maybe the Search transcripts are just heavily downranked or something.

qq66 yesterday at 9:33 PM

I've seen so many outright falsehoods in Google AI Overviews that I've stopped reading them. They're apparently not willing to incur the cost or the latency it would take to make them useful.

citizenpaul yesterday at 10:52 PM

Unrelated to this, but I was able to get some very accurate health predictions for a cancer patient in my family using Gemini and lab test results. I would actually say that, other than one doctor, Gemini was more straightforward and honest about how and, more importantly, WHEN things would progress. Nearly to the day on every point over 6 months.

Pretty much every doctor would only say vague things like "everyone is different, all cases are different."

I did find this surprising, considering I am critical of AI in general. However, I think it's less that the AI is good than that doctors simply don't like delivering hopeless news. An entirely different problem. Either way, the AI was incredibly useful to me on a literal life-or-death subject I have almost no knowledge about.

bjourne yesterday at 4:45 PM

The basic problem with Google's AI is that it never says "you can't" or "I don't know". So many times it comes up with plausible-sounding but incorrect BS in answer to "how to" questions. E.g., "in a Facebook group, how do you whitelist posts from certain users?" The answer is "you can't", but the AI won't tell you that.

ChrisArchitect yesterday at 3:42 PM

Related:

Google AI Overviews put people at risk of harm with misleading health advice

https://news.ycombinator.com/item?id=46471527

RobotToaster yesterday at 9:14 PM

Probably because the majority of medical sites are paywalled.

heliumtera yesterday at 4:00 PM

Ohhh, I'll make one wild guess: in the coming LLM world, the highest bidder will have a higher chance of appearing as a citation or suggestion! Welcome to gas town, so much productivity ahead!! For you, and for the high-bidding players interested in taking advantage of you.

show 1 reply
quantumwoke yesterday at 3:32 PM

It's crazy to me that somewhere along the way we lost physical media as a reference point. Journals and YouTube can be good sources of information, but unless it is heavily confined to high-quality information, current AI is not able to judge citation quality well enough to come up with good recommendations. The synthesis of real-world medical experience is often collated in medical textbooks, and yet AI doesn't cite them nearly as much as it should.

show 1 reply
jeffbee yesterday at 2:54 PM

The assumption appears to be that the linked videos are less informative than "netdoktor" but that point is left unproven.

lifetimerubyist yesterday at 5:11 PM

It’s slop all the way down. Garbage In Garbage Out.

modzu yesterday at 4:29 PM

I'm getting fucking sick of it. This bubble can go ahead and burst.

paulddraper yesterday at 3:28 PM

Same energy as “lol you really used Wikipedia you dumba—“

jmyeet yesterday at 4:26 PM

How long will it be before somebody seeks to change AI answers by simply botting Youtube and/or Reddit?

Example: it is the official position of the Turkish government that the Armenian genocide [1] didn't happen. It did. Yet for years they have seemingly spent resources gaming Google rankings. Here's an article from 2015 [2]. I personally reported such government-propaganda results to Google in 2024 and 2025.

Current LLMs really do seem to come down to regurgitating Reddit, Wikipedia and, I guess for Gemini, YouTube. How difficult would it be to create enough content to change an LLM's answers? I honestly don't know, but I suspect that for certain niche topics this will be easier than people think.

And this is totally separate from the threat of an AI's owners deciding what biases the AI should have. A notable example is Grok's sudden interest in promoting the myth of a "white genocide" in South Africa [3].

Antivaxxer conspiracy theories have done well on Youtube (eg [4]). If Gemini weights heavily towards Youtube (as claimed) how do you defend against this sort of content resulting in bogus medical results and advice?

[1]: https://en.wikipedia.org/wiki/Armenian_genocide

[2]: https://www.vice.com/en/article/how-google-searches-are-prom...

[3]: https://www.theguardian.com/technology/2025/may/14/elon-musk...

[4]: https://misinforeview.hks.harvard.edu/article/where-conspira...

causalscience yesterday at 4:32 PM

> Google AI Overviews cite YouTube more than any medical site for health queries

Whaaaa? No way /s

Like, do you people not understand the business model?

shevy-java yesterday at 7:17 PM

Conflict of interest.

I believe we need to do something. I see the big corporations slowly turning more and more of the world wide web into their private variant.

show 1 reply
ThinkingGuy yesterday at 4:05 PM

Google AI (owned by Alphabet) favoring YouTube (also owned by Alphabet) should be unsurprising.

show 3 replies
delichon yesterday at 3:03 PM

I imagine that it is rare for companies to not preferentially reference content on their own sites. Does anyone know of one? The opposite would be newsworthy. If you have an expectation that Google is somehow neutral with respect to search results, I wonder how you came by it.

show 1 reply