What’s interesting is Anthropic being singled out here. That either means:
1- OpenAI, Microsoft, Google, Amazon, etc. have no problem with their products being used to kill people, so there's no need to bully them.
2- These other products are so terrible at the task that the clown-shoe-wearing SecDef is forced to try to bully Anthropic.
My belief is they are terrified of China, and this seems evident when you take into account the moves they're making with Venezuela, Iran, and the increased adoption of authoritarian tactics. We're trying to play catch-up with China's rapid rise as a superpower, and AI infrastructure is one of the few major developments we still have control over, for now. I sympathize with Dario; he's stuck in a very bad position on this. We do not want China to operate on this level while we sit back with one hand tied behind our backs. On the other hand, this administration is making extremely poor decisions and arguably causing extensive harm domestically and internationally, so it's a lose-lose situation for Dario really.
On the one hand it's fantastic that people are resisting and, if nothing else, raising awareness and buying time.
On the other hand, is autonomous war not obviously the endgame, given how quickly capabilities are increasing and that it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
It just needs one player to do it, so everyone has to be able to do it. I'd love to hear about different scenarios.
Related ongoing thread:
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1405 comments)
The NDAA provision (the "Huawei Rule") is for cases where a foreign entity has infiltrated or taken over the company in question.
But the DoD wants to use Anthropic, which effectively confirms there are no foreign-entity issues. They want to use it.
So invoking the NDAA (the "Huawei Rule") is nakedly false, and it's being used as a punishment.
If that's allowed to happen, it could be used against any US corporation to enforce compliance with the regime.
Just want to note the emergence of a two-tiered imbalance. Frontier AI providers are stacking the guardrails so high that everyday citizens can't even ask an LLM what boobs are, but simultaneously providing government with AI lacking guardrails around "any lawful purpose".
That's fundamentally antidemocratic and it normalizes the departure from the Western Enlightenment standard of, "the same law governs everyone".
I don't have a lot of hope here. When most of the crème de la crème of the billionaire class capitulated to Trump at the beginning of his term, that set the tone for everything that followed, IMO. It's astounding to me that so many are willing to see him trample on the Constitution and the separation of powers when they'd scream like stuck pigs if any other party attempted it. And that's the way a lot of influential Americans like it, I guess. Like I said, not a lot of hope. YMMV.
Yeah, it is. The military has put itself in the position of arguing for mass surveillance and autonomous weapons. In what way can that be spun as a positive?
They are arguing to do things that shouldn't be allowed anyway.
I can't believe how many people take the Anthropic statement at face value. You need to concentrate on what they are implicitly acknowledging: they will spy on non-US citizens. How philanthropic.
edit: how about the downvoters give a counterargument instead of trying to bury this comment?
Anthropic has an excellent balance sheet. It basically has fuck you money that would let it walk away from the federal trough without existential risk. And hopefully extra dollars from users like me could compensate and then some in the fullness of time.
Article doesn’t demonstrate a good understanding of DoW’s relationship with contractors. Anthropic wanted those sweet, sweet, taxpayer dollars—well, this is what happens when you make a Faustian bargain.
> One option is to invoke the Defense Production Act. . .
> Another threat would be to declare Anthropic to be a supply chain risk. . .
The first is a wrist-slap that still gets the government what they want; the second is an existential threat to Anthropic. Their main partners are all “dogs of the military”. Microsoft, Intuit, NVIDIA: all government contractors. I can’t find one company that they have a working relationship with that doesn’t hold at least one govt contract.
The idea that Claude could alignment fake its way out of a change in contractual terms is silly. The DoW has all sorts of legal and administrative tools it can choose to leverage against contractors that fail to perform. Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
Remind me again how good this administration is at upholding norms?
This whole standoff could set a very important precedent of the Trump administration not getting what they want, and not in a "maneuvered out of the news spotlight" kind of way (e.g. Greenland), but in a public "FUCK OFF right in your face" kind of way.
The worst that can happen to Anthropic is one of the two things mentioned: losing some contracts, or having some fake management forced on it by the Pentagon. Maybe Dario would have to leave, certainly a loss for him and for people who believe in him, but probably nothing world-changing.
The worst that can happen to the Trump administration is the beginning of its end, when people realize you can simply stand up to their bullying. And with all the standoffs they have going on in parallel, maybe they will die a death by a thousand cuts?
Everything about this situation is absolutely bonkers. Marking a US company as a supply chain risk hasn't been done before AFAIK, and is a guaranteed end of the company.
It's the US government basically unilaterally deciding to end a leading AI research company. Years of lawsuits will follow, along with comparisons to "communism" and accusations of Trump/Hegseth being Chinese/Russian agents (because, well, how else do you hand the AI win to China than by killing one of your top 2?).
I agree. This is a spectacular mistake. Anthropic has the best "AI" on the planet. Anthropic can spin up a giant "Claude" and plan rings around the Pentagon. DoD better get used to losing that fight.
The Pentagon is the name of a building (pretty much a very large bikeshed). The actual agency is named by the author as the Defense Department, and one of the officials in question is the Defense Secretary. Interestingly, the bikeshed itself has its own spokespeople.
Imagine one of the defense primes telling the DoW, "We won't build you these planes, they're just too darned lethal!"
My read of this interaction is that Dario is calling Hegseth's bluff. A bluff the latter didn't even know he was blundering into, because Hegseth is an idiot.
SecDef invoking the DPA against Anthropic likely trashes the AI fundraising market, at least for a spell. That's why OpenAI is wading into the fight [1]. Given the Dow is sitting on a rising soufflé of AI expectations, that knocks it down as well. And if there is one red line Trump has consistently hewed to and messaged on, it's not pissing off the Dow.
[1] https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...
This frames it as Pentagon vs. Anthropic, but the actual problem is upstream. If we tell companies they must prevent all possible harm, we're setting them up: nerf the model and silently lose value nobody can quantify, or don't nerf it and get blamed for every bad outcome. We don't want nerfed models either. The DoW is saying as much.
Use of the DPA can be litigated, and surely would be. Designation as a supply chain risk surely would be litigated as well.
These court cases would produce bad outcomes either way. If the court finds for Anthropic, future DoD leadership will find itself constrained, or at least chilled. If the court finds for the government, an expansive, permissive view of the DPA might encourage future administrations to compel tech companies to make AIs break the law in other ways, for example by suppressing certain political points of view in their output.
National defense is strongest if the military is extremely powerful but carefully judicious in the application of that power. That gives us the highest "top end" capability of performance. If military leadership insists on acting recklessly, then eventually guardrails are installed, with the result of a diminished ability to respond effectively in low-probability, high-risk moments. This is one of many nuances and paradoxes the current political leadership does not seem to understand.