Hacker News

OpenAI agrees with Dept. of War to deploy models in their classified network

1282 points by eoskx today at 2:59 AM | 603 comments

https://xcancel.com/sama/status/2027578652477821175

https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...


Comments

Imnimo today at 3:36 AM

I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.

blueblisters today at 5:07 AM

My knee-jerk reaction to this was that it looks like the kind of opportunistic maneuver Sam is known for, and I'm considering canceling my subscriptions and business with OpenAI.

But what's the most charitable / objective interpretation of this?

For example - https://x.com/UnderSecretaryF/status/2027594072811098230

Does it suggest that the determination of "lawful use" and Dario's concerns fall upon the government, not the AI provider?

Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.

Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.

gabeh today at 6:45 AM

It's only $200 from me for the remainder of the year, but you're not getting it anymore, OpenAI. Voting with my wallet tonight. Really sad; I've followed OpenAI for years, way before ChatGPT. It's just too hard to square my values with how they've behaved recently. This sucks. Goodnight everyone.

quantumwannabe today at 4:27 AM

More details on the difference between the OpenAI and Anthropic contracts from one of the Under Secretaries of State:

>The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.

https://x.com/UnderSecretaryF/status/2027566426970530135

> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.

> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here

https://x.com/UnderSecretaryF/status/2027594072811098230

cube00 today at 3:45 AM

If the redlines are the same how'd this deal get struck?

> ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.

https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...

spprashant today at 5:01 AM

Just uninstalled the app and canceled my subscription. OpenAI can't justify their insane valuation without a user base. Especially when there are capable models elsewhere.

deaux today at 3:26 AM

All OpenAI employees who vouched for sama's return during the board revolt are personally responsible.

KronisLV today at 12:58 PM

In an imaginary world, this would be a precursor to Anthropic coming to the EU in a greater capacity and teaming up with Mistral, eventually leading to the kind of innovation and progress that DeepSeek forced upon the West, benefitting everyone in the long run. Given their recent announcement (after some backtracking), they seem to have the morals for it, and the respect for human rights and life, unlike OpenAI. Sadly, that's not the real world.

Jcampuzano2 today at 3:49 AM

I would put bets on the issue being that after it was pointed out that Anthropic's models were used to assist the raid in Venezuela, Anthropic aggressively doubled down on their rules/principles, the DOD didn't like being called out on that, and so they lashed out, hard.

If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.

push0ret today at 3:24 AM

So they agreed to the same red lines that had earlier led to the fallout with Anthropic? Kind of strange.

davidw today at 3:52 AM

We need some kind of group like "tech people with morals". I'm done with these people and their corruption and garbage.

ozgung today at 9:41 AM

Do I understand this correctly:

An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.

So killing people is legal,

Killing people by a random process is legal,

A randomized algorithm deciding on who to kill is legal,

And some of you think you are legally protected because they used the word “domestic”?

tintor today at 6:39 AM

Difference from Anthropic's deal is:

- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"

- Anthropic is not ok with use of their AI for autonomous weapons

pbnjay today at 4:51 AM

I had kept my Plus subscription just because I was lazy, and it was inexpensive and convenient… but this turn definitely helped me get off the fence. I am exporting and deleting my data now, and the cancellation is already done.

bodobolero today at 5:27 PM

I canceled my ChatGPT subscription and switched to Lumo Plus subscription https://lumo.proton.me/about I also considered https://mistral.ai/products/le-chat

Both are based in Europe but Proton Lumo has the better privacy promises.

Would be interested in others' experiences with those alternatives for question/answering-type research (not for coding, for which there exist other, better alternatives like Gemini and Claude).

fiatpandas today at 5:50 AM

>human responsibility for the use of force, including for autonomous weapon systems

So there's the difference, and an erasure of a red line. OpenAI is good with autonomous weapon systems. Requiring human responsibility isn't saying much. There are already military courts, rules of engagement, and international rules of war.

ttrashh today at 5:11 AM

Cancel your subscription. It's the least you can do.

adangert today at 10:21 AM

Let me reiterate some points for people here:

Income and revenue sources always, inevitably, and without fail, determine behavior.

taway1874 today at 8:02 PM

Well ... bumped up my Claude subscription from Pro to Max and closed out my OpenAI accounts. It's a drop in the ocean but I'll sleep better knowing I did the right thing. Thanks ChatGPT! It was good knowing you.

fabbbbb today at 7:35 PM

Anyone having success with exporting data from ChatGPT? Got the export email 11 hours ago but still no download link...

dgxyz today at 8:59 AM

Sam Altman being a complete bell end? Who'd have thought it.

I hope everyone goes and works for Anthropic and OpenAI collapses.

Markets going to be interesting on Monday. This plus a war. Urgh.

pu_pe today at 7:27 AM

So this week we've learned that even the government assesses Anthropic has the better model, and that OpenAI leadership has no concern for safety whatsoever.

operator_nil today at 3:42 AM

So does this mean that OpenAI will give whatever the DoD asks for and they will pinky swear that it won’t be used for mass surveillance and autonomous killing machines?

AbstractH24 today at 4:14 AM

It’s amazing how quickly the players keep shifting here.

Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”

Reminds me of that weekend where Sam Altman lost control of OpenAI.

slibhb today at 4:18 AM

I'm unsure how to feel about this whole dust-up. It doesn't seem like much has changed in substance. Maybe OpenAI outmaneuvered Anthropic behind the scenes. Possibly Anthropic was seen as not behaving deferentially enough towards the government. But this administration has proven comically corrupt, so it wouldn't surprise me if money was involved. Will be interested to see what journalists turn up.

vander_elst today at 10:52 AM

Subscribers should be aware of what they are supporting. I think that keeping an OpenAI account can be considered active support of this decision, at least for private individuals who can easily change providers.

gammarator today at 4:09 PM

I would not be surprised if Sam A. helped engineer this whole situation… “Child’s play,” like replacing a reddit ceo.

kledru today at 10:25 AM

Sorry, despite sama's public statements of some sort of solidarity with Anthropic, this looks like a plot to take over from a losing position.

Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.

(Guess I need to build everything I intended this year in a weekend.)

jordanscales today at 3:23 AM

This is awkward? https://news.ycombinator.com/item?id=47188473

iainctduncan today at 4:06 AM

Did anyone ever doubt sama would just follow the money?

weasels gonna weasel

mmanfrin today at 5:02 AM

Absolute disgrace of a person and organization.

nahuel0x today at 8:53 PM

Remember that the US administration is supporting Israel on the ethnic-cleansing and genocide of Gaza, using Palantir technology and AI systems that generate kill lists. It's "IBM and the Holocaust" all over again.

rich_sasha today at 3:33 AM

Is the Pentagon signing a EULA confirming all their data will now be used, anonymised, for improving the service?

matsemann today at 7:47 AM

Going from an open non-profit to a war machine in such a short time is baffling.

e40 today at 7:30 AM

This is how OpenAI gets bailed out in an AI crash, too: "too big to fail" becomes "too important to fail".

corford today at 3:53 AM

If you're unhappy with this, an immediate way to signal it is with your wallet. In my case, I've just uninstalled ChatGPT from my phone, cancelled my subscription, and will up my spend with Anthropic.

deadbolt today at 5:33 AM

Choosing to go along with calling it the "Department of War" tells you all you need to know.

wannabe_loser today at 7:18 AM

I guess we aren't curing cancer with ai anymore

TeeWEE today at 1:51 PM

If you work at OpenAI, leave now while you can.

jdiaz97 today at 5:10 AM

cancelling my openai subscription, they're gonna miss my 20 USD

jaybrendansmith today at 6:29 PM

What part of "These people are fascists, and need to be stopped" are people failing to understand?

insane_dreamer today at 4:23 AM

I'm never using an OpenAI model or Codex ever again. Period. Idaf whether it scores better than Claude on benchmarks or not.

This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.

https://www.nytimes.com/2026/02/27/technology/openai-reaches...

SpaceL10n today at 10:51 AM

Does deploying these models in "the classified network" also mean this technology is going to be used to help kill people?

impulser_ today at 5:07 AM

For the people who don't understand how they got a deal with the same redlines: it's probably because OpenAI agreed not to question them. The safeguards are there, both parties agree; now fuck off and let us use your model how we see fit.

Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanting reassurance that the model wouldn't be used to cross the redlines. The military didn't like this, told them "we aren't using your models unless you agree not to question us", and then the back and forth started.

In the end, we will probably have both OpenAI and Anthropic providing AI to the military and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.

bambax today at 9:33 AM

> In all of our interactions, the DoW displayed a deep respect for safety

Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.

throwaway20261 today at 11:14 AM

It is quite shocking that almost all AI companies are saying "we are not ok with domestic surveillance" but they'll happily sign up to surveilling the rest of the world population.

So by that measure the US govt can go get some Israeli software to surveil their domestic populace!

Homo sapiens deserve to become extinct.

levanten today at 7:16 AM

Funny that these are the same people who have been sounding the alarm on the dangers of AI singularity. Now they cannot wait to put their tools in weapons.
