Hacker News

denysvitali · yesterday at 4:59 PM

Why is this surprising? Isn't it mandatory for Chinese companies to adhere to the censorship?

Aside from the political aspect of it, which probably makes it a bad knowledge model, how would this affect coding tasks, for example?

One could argue that Anthropic has similar "censorship" in place (alignment) that prevents its models from doing illegal stuff, where "illegal" means whatever is (likely) not legal in the USA.


Replies

woodrowbarlow · yesterday at 5:05 PM

Here's an example of how model censorship affects coding tasks: https://github.com/orgs/community/discussions/72603

TulliusCicero · yesterday at 7:46 PM

There's a pretty huge difference between relatively generic stuff like "don't teach people how to make pipe bombs" or whatever vs "don't discuss topics that are politically sensitive specifically in <country>."

The equivalent here for the US would probably be models unwilling to talk about chattel slavery, or Japanese internment, or the Tuskegee Syphilis Study.

behnamoh · yesterday at 5:17 PM

> Why is this surprising?

Because the promise of "open-source" (which this isn't; it's not even open-weight) is that you get something that proprietary models don't offer.

If I wanted a censored model, I'd just use Claude (heavily censored).

nonethewiser · yesterday at 6:10 PM

It's not surprising. It is a major flaw.

indymike · yesterday at 6:43 PM

It is not surprising; it is disappointing.