Hacker News

sudo_cowsay today at 7:02 AM (10 replies)

I sometimes wonder if there are any security risks with using Chinese LLMs. Are there?


Replies

rapind today at 1:56 PM

All China (or anyone) has to do is deliver a close-to-equal product at a much cheaper price and make it scalable and usable... which is what they're doing. It doesn't have to be malicious at all. Just a good product at a good price. The US is basically in a recession that's hiding behind insane AI investments.

dalemhurley today at 7:21 AM

Theoretically yes. It is entirely possible to poison the training data for a supply-chain attack against vibe coders. The trick would be to make it extremely specific to a high-value target so it is not picked up by a wide range of people. You could also target a specific open source project that is used by another widely used product.

However, there are so many factors beyond your control that it would not be a viable option compared to other possible security attacks.
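One defence against that attack shape can be sketched in a few lines: vet every dependency an LLM suggests against an allowlist before it lands in a lockfile, and flag near-miss names that look like typosquats. This is a hypothetical illustration, not a real tool; the allowlist contents and the `vet_packages` name are invented for the example.

```python
# Sketch: screen LLM-suggested package names before installing them.
# ALLOWED stands in for an organisation's vetted-dependency list.
from difflib import SequenceMatcher

ALLOWED = {"requests", "numpy", "cryptography"}

def vet_packages(suggested):
    """Split LLM-suggested package names into (approved, suspicious).

    Anything not on the allowlist is suspicious; a name that closely
    resembles an allowed one is the classic typosquat pattern a
    poisoned model could push.
    """
    approved, suspicious = [], []
    for name in suggested:
        if name in ALLOWED:
            approved.append(name)
        elif any(SequenceMatcher(None, name, ok).ratio() > 0.8 for ok in ALLOWED):
            suspicious.append(name)  # looks like a known package, isn't one
        else:
            suspicious.append(name)  # unknown name: hold for manual review
    return approved, suspicious
```

A name-level check like this only catches the crudest variant of the attack; poisoned code that backdoors a legitimate dependency still needs code review and pinned, hash-verified installs.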

oliwarner today at 7:48 AM

If there is, couldn't they exist in any model?

I don't mean that flippantly. These things are dumped into the wild and used on common, largely open-source execution chains. If you find a software exploit, it's going to affect your own population too.

Wetware exploits (ones targeting the humans) are a bit harder to track. I'd assume there are plenty of biases based on the training material, but who knows whether these models have an MKUltra-style training programme integrated into them?

rhubarbtree today at 7:52 AM

Backdooring software at scale.

Spearphishing.

Building reliance and exploiting it, through state subsidies, dumping, and market manipulation.

Handicapping provision to the west for competitive advantage.

cassianoleal today at 7:48 AM

What about LLMs from other origins? What makes them less risky?

surgical_fire today at 11:47 AM

I sometimes wonder if there are any security risks with using LLMs from the US.

eucyclos today at 7:56 AM

From my experience, kinda the opposite? It's like Chinese software is... harder to weaponize or hurt yourself on. Deepseek is definitely censored, but I've never caught it being dishonest in a sneaky way.

Hamuko today at 7:16 AM

There must be. The executives at my company wouldn't have banned them all for no reason after all.


baal80spam today at 7:38 AM

Is this a serious comment? It honestly reads like famous last words.

Of course there are risks.