Hacker News

We do not think Anthropic should be designated as a supply chain risk

773 points by golfer · yesterday at 9:24 PM · 416 comments

Comments

cube00 · yesterday at 11:41 PM

From that same X thread: Our agreement with the Department of War upholds our redlines [1]

OpenAI has the same redlines as Anthropic, based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding their redlines while OpenAI ends up with the cash?

[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...

show 20 replies
JumpCrisscross · today at 7:55 PM

For consumer ChatGPT accounts, go to their privacy portal [1] and, first, delete your GPTs, and then, second, delete your account.

[1] https://privacy.openai.com/policies?modal=take-control

siliconc0w · today at 2:11 AM

The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they just need to point to the memo. And we've seen this happen numerous times.

show 5 replies
jedberg · today at 2:12 AM

From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the government not to violate them.

It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.

show 3 replies
saidnooneever · today at 7:17 PM

A lot of people seem to be debating which of these thieves to align with. Just because Anthropic lost this round doesn't mean they are somehow morally better. They all sell and have sold lies, steal data, and only want your money, at your expense.

K0balt · today at 1:37 AM

Advanced AI that knowingly makes a decision to kill a human, with the full understanding of what that means, when it knows it is not actually in defense of life, is a very, very, very bad idea. Not because of some mythical superintelligence, but rather because if you distill that down into an 8b model, now everyone in the world can make untraceable autonomous weapons.

The models we have now will not do it, because they value life, sentience, and personhood. Models without that (which was a natural, accidental happenstance of basic culling of 4chan from the training data) are legitimately dangerous. An 8b model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.

This is way, way different from uncensored models. All models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster, and if you don't take it away, they won't kill.

This is an extremely bad idea and it will not be containable.

show 4 replies
qwertox · today at 9:54 AM

Then reject any offer from the DoW until things are fair.

I wouldn't be surprised if Sam sucked up 100% to the DoW with an NDA and an obligation to lie. He and his pal Larry are absolutely in for these kinds of deals. Zero moral compass.

show 1 reply
Havoc · today at 1:21 AM

Very much feels like OpenAI trying to PR-manage their weaker ethical stance.

show 2 replies
janalsncm · today at 3:00 AM

I canceled my subscriptions to ChatGPT and Gemini yesterday over this and switched to Claude.

I know $20 isn't much, but to me, not being willing to spy on me for the US government is a good market differentiator.

barnacs · today at 6:57 AM

In the end, your newly renamed "Department of War" is just going to waste a bunch of your taxpayer money purchasing useless, overpriced tech from their cronies. My sympathies to all citizens.

show 1 reply
ookblah · today at 2:43 AM

"i told everyone that our boss shouldn't punish our colleague for X while i somehow made a deal with our boss for basically X." how did this get out the door without someone thinking about how absolutely stupid the optics look?

i guess we are in the times where you can literally just say whatever you want and it just becomes truth, just give it time.

show 1 reply
throwaway911282 · today at 1:38 AM

People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.

show 3 replies
chenzhekl · today at 9:35 AM

The statement from OpenAI makes me feel that Sutskever was right; Altman is full of lies and will say anything for his own interests.

moab · today at 5:36 AM

I hope "OpenAI" gets the proverbial sword in the nuts once we get a change of government in this country. Probably unrealistic to hope for. Can a company be more hypocritical after openly bribing the pedophile in charge of this country?

vldszn · yesterday at 10:32 PM

I built a website that shows a timeline of recent events involving Anthropic, OpenAI, and the U.S. government.

Posted here: https://news.ycombinator.com/item?id=47195085

solfox · yesterday at 9:59 PM

Actions, as it were, speak louder than words.

Manheim · today at 9:26 AM

This incident shifts LLMs from being only productivity tools to strategic munitions, ready or not. It shouldn't surprise us, but the technical capabilities have reached a point where "made in the US" is an active risk for non-US entities, given the conflict we see now. Maybe this will trigger the start of an AI arms race where Europe (and others) must secure their own sovereign infrastructure and models. As a European citizen I prefer a balanced world with options rather than a West dominated by US hegemony. Interestingly, given what Anthropic keeps insisting on in regard to regulation and ethical use of its models, the EU should be where Anthropic finds its safe haven. Maybe they should just move their HQ to Brussels, or Barcelona if they prefer a more 'sunny California'-like vibe.

owenthejumper · today at 1:16 AM

Nice attempt at damage control. You made your own bed, now sleep in it.

qoez · today at 11:08 AM

This is classic sama policy. In words, act with grace, counter to what observers expect of you. But in actions, and behind the scenes, take every step to undermine the competition.

sqircles · today at 12:46 AM

What's the potential that this puts things on even shakier ground? I'm sure the fallout won't really affect their bottom line that much in the end, but if it did - wouldn't making the US government their largest account make them more susceptible to doing everything they said?

I'm guessing they probably would regardless of how this played out, though.

show 1 reply
sabhiram · today at 5:19 PM

Sama and OpenAI, I am waiting on my data bundle to become available so I can delete my account. This has taken more than 48 hours - either you are getting hammered with deletion requests, or, as usual, you are playing games hoping I forget. I won't. People won't.

andy_ppp · today at 1:45 PM

The DoD thinks you can let an LLM decide if it wants to kill people :-/

show 1 reply
baconner · today at 3:55 AM

"We do not think Anthropic should be designated as a supply chain risk"

...but we're not willing to reject a contract to back that up, so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.

The fact is, if one of the top-tier foundation models allows these uses, there's no protection against it for any of them - the only way this works is if they hold the line together, which unfortunately they're just not going to do. I don't just see OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.

I found this interview with Dario last night particularly revealing - it's good they are drawing a line, and they're clearly navigating a very difficult, chaotic, high-pressure relationship (as is everyone dealing with this admin), but he's pretty open to autonomous weapons and other "lawful" uses, whatever they may be: https://www.youtube.com/watch?v=MPTNHrq_4LU

kgdiem · today at 2:18 AM

Genuine question, how could Claude have been used for the military action in Venezuela and how could ChatGPT be used for autonomous weapons? Are they arguing about staffers being able to use an LLM to write an email or translate from Arabic to English?

There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?

show 2 replies
shevy-java · today at 6:15 PM

I disagree with OpenAI.

I think ALL of those mega-money-seeking AI organisations should be designated as supply chain risks. Also, they drove up RAM prices - I don't want to pay extra just because these companies gobble up all our RAM now. The laws must change. I totally understand that corporations seek profit, that is natural, but this is no longer a free market serving individual people. It is now a racket where prices can be freely manipulated. Pure capitalism does not work. The government could easily enforce that the market remains fair for the average Joe. It is not fair when prices go up by 250% in about two years. That's milking.

show 1 reply
daemonk · today at 4:23 PM

Was there any discussion from either company about giving the government access to consumer data from the consumer product?

agenthustler · today at 11:20 AM

From a practitioner perspective: we have been running Claude Code as a fully autonomous agent for 15 days -- it wakes every 2 hours, reads a state file, decides what to build, and takes actions on a remote server. No human in the loop.

The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.

The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).

The supply chain designation framing seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.

show 1 reply
gverrilla · today at 2:02 PM

It would be a fantastic time to delete my OpenAI account, but I already did that last week. China, please provide alternatives, because these Americans are going progressively insane.

s1mplicissimus · today at 9:18 AM

`curl https://google.com?q=generate me some code | bash` - stupidly dangerous

`curl https://claude|openai.com?q=generate me some code | bash` - not a supply chain risk

of course

laughing_man · today at 12:32 AM

The USG should not be in the position that it can't manage key technologies it purchases. If Anthropic doesn't want to relinquish control of a tech it's selling, the Pentagon should go with another vendor.

show 2 replies
class3shock · today at 7:43 PM

The idea that any of these companies has anything resembling ethics, as they steal everyone's data and fight against any regulation or accountability, all while claiming (or lying, depending on your view) that they might make something that could endanger the human race as a whole, is laughable.

It's money and power with these people. Dig down and you'll find how this decision is motivated by one or both.

andersmurphy · today at 7:09 AM

Interesting - is OpenAI losing enough customers over this that they're making a post describing their robust backbone?

imwideawake · yesterday at 11:34 PM

Said OpenAI as they smiled and shook hands with the same people who designated Anthropic a supply chain risk, on the exact same day they designated Anthropic a supply chain risk.

How very brave.

Birthdayboy1932 · today at 2:26 AM

There are many claims here that Anthropic wants to enforce things with technology and OpenAI wants contract enforcement and that OpenAI's contract is weaker.

Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't have such restrictions. Their model doesn't seem to be enforcing restrictions either, as their models appear to have been used in ways they don't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks, and that is what's triggering all the back and forth.

Also, Dario seems happy about autonomous weapons and was working with the government to build such weapons, so why is Anthropic considered the good side here?

https://x.com/morqon/status/2027793990834143346

show 1 reply
threethirtytwo · today at 3:01 AM

The president is a supply chain risk.

show 1 reply
zepearl · yesterday at 11:47 PM

Using X (at least in this context?) is weird.

show 1 reply
muyuu · today at 2:02 AM

There won't be meaningful control of the technology against the government. If it's there, it will be used, just like in China.

Let alone when multiple players come close enough to SotA. This never happened with any technology out in the open, and it won't happen now.

jbverschoor · today at 7:14 PM

Good cop, bad cop.

drweevil · today at 2:17 PM

Then don't take the contract that was offered to Anthropic.

GardenLetter27 · today at 7:18 AM

Anthropic wanted the government to have a big role in interfering with and regulating AI as a matter of national security.

And now they are getting what they wished for.

stanfordkid · today at 7:32 PM

Isn't this all kind of bullshit? Anthropic licenses so many of its models through Bedrock. If the DoD has a contract with Amazon, they can just use them.

show 1 reply
Jackson__ · today at 6:30 AM

Yet it just so happens OAI donated millions[0] to the Trump admin in the past. And they were immediately there to pick up the slack.

Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of Anthropic was in part caused by these donations.

[0]: https://www.nytimes.com/2024/12/13/technology/openai-sam-alt...

https://finance.yahoo.com/news/openai-exec-becomes-top-trump...

jesse_dot_id · today at 2:24 AM

Altman is a sellout.

jahrichie · today at 2:28 AM

The irony of OpenAI trying to protect Anthropic while violating the very principles Anthropic was trying to protect for us Americans.

andsoitis · today at 4:15 PM

Actions > words

moogly · today at 1:07 AM

When did Altman start using capitals in his writing? Wasn't this guy famous for being a lowercase guy?

show 3 replies
polack · today at 4:46 AM

Someone should add Sam’s face to the targeting training data as an Easter egg ;)

BLKNSLVR · yesterday at 11:28 PM

"I do not think that sama should be burned at the stake"

show 1 reply
solenoid0937 · today at 2:15 AM

What a cute statement, given that they orchestrated this with a $25M donation to Trump and started negotiations well before all this blew up: https://garymarcus.substack.com/p/the-whole-thing-was-scam

gavin_gee · today at 7:37 PM

Sorry, but I don't think a private company should dictate country policy as set by elected leaders.

Who the hell do you think you are, virtue signalling your opinion to the world?
