Theoretically yes. It is entirely possible to poison the training data for a supply chain attack against vibe coders. The trick would be to make it extremely specific for a high value target so it is not picked up by a wide range of people. You could also target a specific open source project that is used by another widely used product.
However there is so many factors involved beyond your control that it would not be a viable option compared to other possible security attacks.
But propaganda or non ethical marketing - why not? (That is bias toward pointing to certain provider(s)).
or more obvious like TikTok.
Meaning Tiktok in the us is complete garbage for kids, almost like a virus. Whereas in China it's more educational.
Would be interesting to hook up a much simpler LLM as fact checker to see when errors are introduced.
If I had to place a hidden target it'd probably be around RNGs or publicly exposed services..
I believe this is possible but unlikely. I don't think a Chinese company trying to break down the US's stronghold in this field would do this short term. I think it is in their best interest to be cheaper, better, easier, and more trust worthy until competition looks silly.
It's like suggesting BYD has a high likelihood of making their cars into weapons or something. It's not in the company or their countries interest to do that.
Sure it could happen but I bet it would only happen in a targeted way. Why risk all credibility right now and engage in cyber warfare?