Hacker News

OpenAI agrees with Dept. of War to deploy models in their classified network

1325 points · by eoskx · yesterday at 2:59 AM · 612 comments

https://xcancel.com/sama/status/2027578652477821175

https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...


Comments

jaybrendansmith · yesterday at 6:29 PM

What part of "These people are fascists, and need to be stopped" are people failing to understand?

darkstarsys · yesterday at 1:12 PM

All of this, the news articles, the social media discussion, this very discussion, will be part of the training set for future AIs. What will they learn from this?

kseniamorph · yesterday at 10:24 AM

Is there anyone who really understands what’s different about the OpenAI agreement? Or maybe these are just Sam Altman’s public statements that don’t actually reflect the real terms of the deal. I honestly can’t figure it out.

imwideawake · yesterday at 10:29 AM

Google, OpenAI, and Anthropic should all have each other's backs when it comes to hard lines like this. Sam can say whatever he wants, but signing this deal on the same day Trump and Hegseth went scorched earth on Anthropic — for standing up for the very values OpenAI claims to hold — is sleazy.

Screw Sam, and screw OpenAI. I've been a customer of theirs since the first month their API opened to developers. Today I cancelled my subscription and deleted my account.

I'd already signed up for Claude Max and had been slow to cancel my OpenAI subscriptions. This finally made the decision easy.

lm28469 · yesterday at 9:36 AM

> OpenAI CEO Sam Altman shares Anthropic’s concerns when it comes to working with the Pentagon

The same day:

Pssst psst Samy Samy, come here we have money and data psst

> Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.

elAhmo · yesterday at 3:50 AM

All that money and not a single ounce of integrity.

petee · yesterday at 11:17 AM

This explains the "Free Codex" offer I just got in my email.

nahuel0x · yesterday at 8:53 PM

Remember that the US administration is supporting Israel on the ethnic-cleansing and genocide of Gaza, using Palantir technology and AI systems that generate kill lists. It's "IBM and the Holocaust" all over again.

vorticalbox · yesterday at 3:49 PM

> prohibitions on domestic mass surveillance

so foreign mass surveillance is all good?

superkuh · yesterday at 3:37 AM

I have just canceled all services and deleted my account with OpenAI. They can get money from the current US regime but I will not contribute to their violations of the constitution.

jstummbillig · yesterday at 6:26 AM

> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, it actually is the opposite.

I am fully prepared to believe that they got absolutely nothing else out of it (to date).

interestpiqued · yesterday at 3:46 AM

What a snake

m4rtink · yesterday at 4:23 AM

So this is indeed how OpenAI survives (a little bit longer?) - a government bailout.

redml · yesterday at 8:11 AM

regardless of your opinion of AI in government, sam could not have picked a worse moment, optics-wise, to swoop in and make a deal. it just looks incredibly bad.

otterley · yesterday at 2:57 PM

The stories I’ve been reading say that the DoW’s agreement with OpenAI contains the very same limitations as the agreement with Anthropic did. In other words, they pressured Anthropic to eliminate those restrictions, Anthropic declined, then they made a huge fuss calling them “a radical left, woke company,” put them on the supply-chain risk list, then went with OpenAI even though OpenAI isn’t changing anything either.

The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.

https://www.wsj.com/tech/ai/trump-will-end-government-use-of...

“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”

d--b · yesterday at 4:10 AM

At this stage, everything OpenAI does is an attempt to keep investors investing.

They’re willing to let their brand go to trash for this government contract.

Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.

But Altman seems so desperate to keep the cash coming he’s ready to do anything.

DebtDeflation · yesterday at 1:12 PM

At this point it seems the entire AI Safety/Ethics debate was nothing more than a marketing campaign to hype up the capabilities of the models: get people to think that if the models are potentially dangerous, they must be extremely capable, so they need to sign up for a subscription.

owenthejumper · yesterday at 12:10 PM

Well, in the end this is great news - this virtually guarantees an Anthropic win in court.

LarsDu88 · yesterday at 5:57 AM

China has evacuated its embassies in Iran.

This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.

When this happens, Altman will go from being merely a grifter to having blood on his hands.

straydusk · yesterday at 3:56 AM

I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."

However, if you live in the US and pay passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:

* Make a negotiation personal

* Emotionally lash out and kill the negotiation

* Complete a worse or similar deal, with a worse or similar party

* Celebrate your worse deal as a better deal

Importantly, you must waste enormous time and resources to secure nothing of substance.

That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.

mrweasel · yesterday at 10:02 AM

Didn't the department of war announce that it would be working with xAI just this past December?

hnthrowaway0315 · yesterday at 4:43 AM

Ah, is it the time when Skynet starts to manifest itself...

tibbydudeza · yesterday at 11:57 AM

While Dario is not my hero, given some of the outrageous things he says, he has a firm moral compass and a backbone that aligns with mine, and thus I will support his company and their products in my personal use and my work.

mkozlows · yesterday at 4:02 AM

So there are two possibilities here:

1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.

2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.

Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.

outside1234 · yesterday at 5:33 AM

Screw OpenAI. Never opening that app again or using one of their models.

dataflow · yesterday at 3:55 AM

This seems full of loopholes.

> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?

(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?

(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?

(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?

t0lo · yesterday at 3:47 AM

Snakes, as predicted.

rvz · yesterday at 3:42 AM

Not a surprise here, that letter was a trap for OpenAI employees who filled it out with their names on it. [0]

The ones that did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?

[0] https://news.ycombinator.com/item?id=47176170

[1] https://www.theguardian.com/technology/2025/jun/17/openai-mi...

coryodaniel · yesterday at 7:52 PM

Don’t just cancel, flood them with CCPA requests.

midnitewarrior · yesterday at 4:55 AM

Opportunism without principles at its finest.

brainzap · yesterday at 12:53 PM

the AI datacenters built for $180B are used for surveillance and control

verdverm · yesterday at 1:25 AM

If the "safety stack" (guardrails) bit is true, it's the exact opposite of their beef with Anthropic... which is not surprising given who's running the US right now.

I always assumed those folks need a way to look strong with their base for a media moment over equitable application of the policies or law.

weare138 · yesterday at 5:24 PM

There was an '80s movie about this...

arendtio · yesterday at 7:35 AM

So now we are waiting for Anthropic to explain to us what Sam agreed to and what they rejected.

On the surface, it looks like both rejected 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.

One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.

robertwt7 · yesterday at 3:35 AM

How did the government agree, with OpenAI, to the very terms that Anthropic initially put forward? Surely there's a catch here. Or is it just Sam's negotiation skill?

drivebyhooting · yesterday at 4:07 AM

In my experience ChatGPT is the most sanctimonious of the leading models.

When I need advice for my clandestine operations I always reach for Grok.

tayo42 · yesterday at 7:02 AM

How do LLMs get used in either surveillance or autonomous weapons? Using written English seems so inefficient.

_zoltan_ · yesterday at 12:52 PM

to all the naysayers: what did all these people doing AI research expect? That the military wouldn't want to use their stuff? And then, when it does, a surprised Pikachu face?

I know I'll get downvoted, but come on, this is so very naive.

looksjjhg · yesterday at 6:22 AM

So it’s personal basically

AmericanOP · yesterday at 4:07 AM

Department of War just killed OpenAI's brand

dakolli · yesterday at 4:01 AM

They're pretending like they didn't enter into this agreement last January and are completely entrenched in intelligence programs already. They are trying to make it look like they are stepping up in a time of need (time of need for the DoD), in reality they sold their soul to intelligence and the military a year ago.

I posted about this here after Sam made his tweet:

https://news.ycombinator.com/item?id=47189756

Source: https://defensescoop.com/2025/01/16/openais-gpt-4o-gets-gree...

FrustratedMonky · yesterday at 1:27 PM

Maybe the problem here is they are negotiating by using social media posts. Where is the team of Anthropic people, and the team of Gov people, that should be in a room somewhere doing this in private?

skygazer · yesterday at 4:54 AM

Perhaps Trump's DOD objects specifically to Anthropic models themselves declining to do immoral and illegal things, and not something just stipulated in an ignorable contract. That would give room for Sam to throw some public CYA into a contract, while neutering model safety to their requirements.

_alternator_ · yesterday at 3:12 PM

So while Sam Altman claims that OAI received promises from Hegseth not to build a fully automated killbot-GPT, Anthropic received the same promises(!), but with weasel legal language that allowed the USG to ignore the restrictions at will. (We all know how the current admin reads such language.)

So until we see the contract I think it’s fair to assume that OAI and Anthropic got roughly the same deal, with Anthropic insisting on language that actually limits the government, while OAI licked the boot and is passing it off like filet mignon.

utopiah · yesterday at 6:43 AM

Oh yeah, from the company whose raison d'être was being open and doing good.

shocked Pikachu face

Come on, by now we all know the only thing Altman (who else is still at OpenAI from the start?) wants is more money and more power; it doesn't really matter how.

webdevver · yesterday at 10:36 AM

TOTAL ALTMAN VICTORY
