I use YouTube’s AI to screen podcasts, but I’ve noticed it glossing over large sections involving politically sensitive or outlandish topics. Although the AI could verify these details when pressed, its initial failure to include them constitutes a form of editorializing. While I understand the policy motivations behind this, such omissions are unacceptable in a tool intended for objective summarization.
I’m pretty sure YouTube’s built-in AI summary is also biased towards not “spoiling” the video.
Like if the title is clickbait (“this one simple trick to...”), the AI summary right below will list all the things accomplished with the “trick,” but they still want you to actually click on the video (and watch any ads) to find out more. They won’t reveal the trick in the summary.
So annoying, because it could be a useful time-saving feature. But what actually saves time is clicking through and just skimming the transcript myself.
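For what it’s worth, that part is easy to script. A minimal sketch, assuming the unofficial youtube-transcript-api Python package (the older get_transcript call; the exact API varies by version) and a placeholder video ID:

    # Sketch: fetch a video's transcript as plain text so you can skim or grep it.
    # Assumes the unofficial youtube-transcript-api package
    # (pip install youtube-transcript-api); get_transcript() is the older call,
    # newer releases expose a slightly different interface.
    from youtube_transcript_api import YouTubeTranscriptApi

    def dump_transcript(video_id: str) -> str:
        # Returns a list of {"text", "start", "duration"} segments; join the text.
        segments = YouTubeTranscriptApi.get_transcript(video_id)
        return " ".join(seg["text"] for seg in segments)

    if __name__ == "__main__":
        print(dump_transcript("VIDEO_ID_HERE"))  # placeholder ID, not a real video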
The AI features are also limited by context length on extremely long-form content. I tried the “ask a question about this video” feature and it could answer questions about the first two hours of a very long podcast, but not the third hour. (It was also pretty obviously using only the transcript, and couldn’t reference on-screen content.)
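The obvious workaround for that length limit is chunking: pull the transcript yourself, split it into pieces that fit a context window, ask the question per chunk, then merge the answers. A toy sketch, where the word budget is a crude stand-in for tokens and ask_llm is a made-up “prompt in, text out” callable you would supply yourself (none of this is anything YouTube exposes):

    # Toy map-reduce over a long transcript: answer per chunk, then combine.
    def chunk_text(text: str, max_words: int = 3000, overlap: int = 200) -> list[str]:
        # Split into overlapping word-count chunks as a rough token proxy.
        words = text.split()
        chunks, start = [], 0
        while start < len(words):
            chunks.append(" ".join(words[start:start + max_words]))
            start += max_words - overlap
        return chunks

    def answer_over_transcript(transcript: str, question: str, ask_llm) -> str:
        # Ask the question against each chunk, then ask once more to merge.
        partials = [ask_llm(f"{question}\n\nTranscript excerpt:\n{c}")
                    for c in chunk_text(transcript)]
        return ask_llm(f"{question}\n\nCombine these partial answers:\n"
                       + "\n\n".join(partials))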
This is a delicate balance to achieve. I hate how cowardly most LLMs are about controversial topics, but if you aren't careful you end up with Grok saying insane things.
I’ve used this tool for YouTube AI summaries: https://gocontentflow.com/submit