Domestic mass surveillance might feel tolerable when you live in the country conducting it. But how would you feel about other countries adopting similar policies, and thereby mass-surveilling the American people? Because that's exactly what these policies authorize when applied to the rest of the world.
Among other consequences, if Anthropic ends up being killed, it’s going to be just another nail in the coffin of trust in America.
Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.
The problem with forcing public policy on companies is that companies are ultimately made of individuals, and surely you can’t force public policy down people’s throats.
I’m sure nothing good can come of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This is a trap. Two, I guess, but let's take the first one:
Domestic mass surveillance. Domestic.
Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...
Expanding:
> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.
Banning domestic mass surveillance is irrelevant.
The eyes agreements allow the participating countries to share data with each other. Every country spies on every other country, and every country tells the others what it has gathered.
This renders laws that prevent the state from spying on its own citizens irrelevant. They end up serving only as evidence of mass manipulation.
The USA showed itself during Covid to be a command economy that uses 'private enterprise' as a facade of legitimacy. Without government spending, employment, and contracts, the USA would have net negative growth.
Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities.
Either way, it is beyond time to reform the Military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains given military needs in various countries. (Taiwan and Thailand)
Here's the sequence (so far) in reverse order - did I miss any important threads?
Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)
I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)
President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)
Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)
The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)
Tech companies shouldn't be bullied into doing surveillance - https://news.ycombinator.com/item?id=47160226 - Feb 2026 (157 comments)
The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)
US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)
Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)
The talk about declaring Anthropic a supply-chain security risk (which doesn’t just remove it from the DoW but also from all the contractors and suppliers that supply the DoW) was also accompanied by a completely different threat: to declare a national security need to take over the company.
Prediction: in time, OpenAI will be declared such, to privatise profits but socialise losses.
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Hope is neat, but are the signatories willing to quit their jobs over this? Kind of a hollow threat if not.
"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.
The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?
What is this supposed to do? OpenAI is already cozied up and in bed with Dept of War, they're already busy making lots of little surveillance babies.
I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.
Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.
What, then, is this really about?
This reminds me a bit of the Black Mirror episodes with the bees. Where the people whose names tweeted something were actually the targets...
Before you leave a comment about how meaningless this is unless they do XYZ,
please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you as an outsider can help support these organizers
The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.
All of this should remain a bridge too far, forever.
EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.
Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.
Yeah, I guess OpenAI is so upset with the Department of War that they signed a deal with it! Hypocrisy all around. https://x.com/grok/status/2027769947913425390?s=20
Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?
Also, another warning to anonymous users: it's a little bit naive to trust the "Google Forms" verification option more than the email one, given both employers probably monitor anything you do on your devices, even if it's loading the form. And, in Google's case, they could obviously see what forms you submitted on the servers, too. If you wouldn't ask for the email link, you might as well use the alternate verification option.
Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.
P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.
I am not a fan of the Anthropic guys, but this time I stand with them. We all should.
» Have there been any mistakes in signature verification for this letter?
» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.
This should be flagged political like literally everything else that has been flagged. Ironic how, when you're on the menu, you don't follow the same protocols applied to everyone else.
I only say this because this is not new behavior for the administration. It's been reported here on HN, in less biased and political ways, but ends up suppressed. Just confused about what changed.
Edit: just to be clear, this shouldn't be flagged, and posts that dealt with rights in the past shouldn't have been flagged either, because rights should be the preeminent concern of anyone in tech.
Nicely done. Hold this line — there’s got to be one somewhere.
This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.
It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.
Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.
Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.
I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.
I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.
It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
It's like watching Darth Vader Senior fight Darth Vader Junior while Luke Skywalker is nowhere in sight.
You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.
If the DoW/DoD wants Anthropic, they'll get Anthropic, whether we know about it publicly or not. It's not unlikely that they're already working together and just putting on a show for the public.
I'd even go as far to say that if this is indeed a publicity campaign it is the most successful one I've seen in years. Many detractors of the existence of LLMs are suddenly leaping to Anthropic's defence.
I clearly see the point against using AI for mass surveillance and fully autonomous weapons. But for the latter, I don't see a choice. If other countries are willing to allow fully autonomous weapons using their own AI, it's no longer a matter of choice, you have to do it too.
Yes, take disparate sets of employees and, like, oh idk, unionize while you still have power.
The primary purpose of these products is mass surveillance; why else would they be allowed to be built?
For all the authoritarian-regime talk, here we have a list of many non-citizens willing to argue with the secretary of war of a country they are temporary residents of, with no fear of repercussions.
I think the time when engineers could steer the heading of the companies they work for is long gone, sadly.
It’s too little too late. Don’t be evil is not a value anyone is even pretending to uphold.
I’d rather some of these very smart people start to develop countermeasures.
This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950
"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."
If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?
Sadly didn’t age well - OpenAI enthusiastically caved
Wouldn’t it be ironic if the US used open-source Chinese models for domestic mass surveillance and autonomously killing people without human oversight… democracy at its best.
Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies inevitably take the million-dollar contracts.
This is a nice gesture but completely meaningless. There is absolutely no commitment in this. "We hope our leaders.." has no conditions, no effects.
If you're an employee and actually believe in this you need to commit to something, like resigning.
The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.
I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.
More like “you have been divided” — OpenAI
Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.
> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.
Prisoner's Dilemma in Action!
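A toy payoff matrix makes the dynamic concrete (numbers are purely illustrative, not from the letter or the thread): each lab chooses to "hold" the red line or "cave" and take the contract, and caving is individually rational even though both labs end up worse off than if they had held together.

```python
# Illustrative Prisoner's Dilemma sketch. Payoffs are (lab_a, lab_b);
# a lab that caves while the other holds captures the contract alone.
PAYOFFS = {
    ("hold", "hold"): (3, 3),   # both keep public trust, no contract
    ("hold", "cave"): (0, 5),   # the caver takes the whole contract
    ("cave", "hold"): (5, 0),
    ("cave", "cave"): (1, 1),   # contract split, trust lost
}

def best_response(other_choice):
    """A lab's individually rational reply, given the other lab's choice."""
    return max(("hold", "cave"),
               key=lambda mine: PAYOFFS[(mine, other_choice)][0])

# Caving dominates no matter what the other lab does...
assert best_response("hold") == "cave"
assert best_response("cave") == "cave"
# ...yet mutual caving leaves both worse off than mutual holding.
assert PAYOFFS[("cave", "cave")][0] < PAYOFFS[("hold", "hold")][0]
```

Which is exactly why the parent quote matters: publishing where everyone stands turns a one-shot dilemma into something closer to a coordinated game.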
The regulatory environment in the US is insane
HN should apply their flagging of posts consistently: either flag the politics or don't flag it at all.
How come this is signed by OpenAI engineers while OpenAI itself is partnering with the DoW? https://x.com/sama/status/2027578652477821175
My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.
Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.
I know it is a serious topic, but before I clicked on it, I assumed this was going to be about prime numbers...
Maybe it can get reused after this stuff is over.
Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.
As far as I know, Google has been doing massive amounts of business with the war department since its very inception. What makes this particular contract different? I really am trying to understand why these sentiments now.
This was a brave, heartwarming read. Thank you to the teams
The bravery of the people signing this anonymously is inspiring.
These two exceptions shouldn't have to be disputed.
At this point I'd go so far as to say I wouldn't trust my AI history to any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.
Your AI will know more about you than any other company does; I'm not going to trust that to anyone who trades ethics for profit.
This has much broader implications for the US economy and rule of law in the US.
If government procurement rules intended for national security risks can be abused as a way to punish Anthropic for perceived lack of loyalty, why not any other company that displeases the administration like Apple or Amazon?
This marks an important turning point for the US.