Hacker News

WarmWash · yesterday at 4:19 PM

From a level-headed outside perspective:

It looks like Anthropic likely wanted to be able to verify compliance with the terms on its own, whereas OpenAI was fine with letting the government police itself.

From the DoD's perspective, they don't want a situation where a target is being tracked and then the screen goes black because an Anthropic committee decided the use was out of bounds.


Replies

GorbachevyChase · yesterday at 5:15 PM

I don’t know why more people don’t see this. It’s a matter of providing strong guarantees about the reliability of the product. There is already mass surveillance. There is already lethal action without proper oversight.

syllogism · yesterday at 6:35 PM

The government's free to not like the terms and go with another provider. That's whatever.

The government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use the Defense Production Act to commandeer the product anyway)." How much more blatantly authoritarian does it get than that?