How long will it be before somebody tries to change AI answers simply by botting YouTube and/or Reddit?
Example: it is the official position of the Turkish government that the Armenian genocide [1] didn't happen. It did. Yet for years Turkey has seemingly spent resources gaming Google rankings; here's an article from 2015 [2]. I personally reported such government propaganda results in Google in 2024 and 2025.
Current LLMs really seem to come down to regurgitating Reddit, Wikipedia and, I guess for Gemini, YouTube. How difficult would it be to create enough content to change an LLM's answers? I honestly don't know, but I suspect that for certain niche topics it will be easier than people think.
And this is totally separate from the threat of an AI's owners deciding what biases it should have. A notable example is Grok's sudden interest in promoting the myth of a "white genocide" in South Africa [3].
Antivaxxer conspiracy theories have done well on YouTube (e.g. [4]). If Gemini weights heavily towards YouTube (as claimed), how do you defend against this sort of content producing bogus medical results and advice?
[1]: https://en.wikipedia.org/wiki/Armenian_genocide
[2]: https://www.vice.com/en/article/how-google-searches-are-prom...
[3]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
[4]: https://misinforeview.hks.harvard.edu/article/where-conspira...