Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.
Anthropic is welcome to set up shop here in Canada! I hear Victoria BC is great. Absolutely brimming, overflowing with technology talent
Not to intentionally sidetrack the conversation, but when did we start calling service members 'warfighters?'
I've been seeing it a lot lately, but don't remember ever really seeing it before. Do members of the military prefer this title?
This part stood out to me:
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”
I had assumed these exceptions (on domestic surveillance and autonomous drones) were more than presuppositions.
Heck yeah, so happy to see Anthropic fighting. This is what real leadership looks like. I'd love to see the same from Google and OpenAI.
Is this the first company to actually stand up, face to face, to the current administration?
I don't know what's funnier: that Anthropic convinced the Pentagon LLMs are smart enough to guide missiles, only to have it backfire on them with the threat of nationalization if they didn't help build ralph ICBMs, or that Pete thinks Opus is Skynet and that only Anthropic has the power to train it.
Had cancelled my Claude sub after they banned OAuth in external tools, but just renewed it today after seeing their principled stance on AI ethics. Principles matter more when they hurt profits; happy to support them as a customer whilst they keep to them.
Dear Anthropic,
Europe is a nice place, too. In case you need GPUs, we have AI factories for you: https://digital-strategy.ec.europa.eu/en/policies/ai-factori...
We also don't engage in mass surveillance or develop autonomous weapons.
This is kind of crazy. Instead of just cancelling a mutually agreed-upon contract when Anthropic refused to bow to sudden new demands, the Dept of Defense went straight to the nuclear option: threatening to label an American tech company as a "supply chain risk," a heavy-handed tactic usually reserved for foreign adversaries (think Huawei or DJI).
It's also incoherent that the DoD/DoW was threatening either to invoke the Defense Production Act OR to classify them as a "supply chain risk". They're either so uniquely critical to national defense that their cooperation must be compelled, OR they're such a severe liability that they have to be blacklisted for anyone in the DoD apparatus (including the many subcontractors) to use. It can't be both.
How are other tech companies supposed to work with the US government and draw up mutual contracts when those terms can be suddenly questioned months later and used in such devastating ways against them? Setting the morals/principles aside, how does it make for a rational business decision to work with a counterparty that behaves this way?
What's stopping the government from using the usual nasty tricks the world has known about for decades?
DPA? All Writs Act?
Force them to comply and then prevent them talking about it with NSLs?
I appreciate that Anthropic may be the least bad of a bunch of really bad actors here, but this has played out before in the US, and the burden of trust is, and should be, really high. I believe that Anthropic don't want to remove the "safety barriers" on their tech being used for domestic surveillance and military operations, but that implies they're ok with those use-cases so long as the "safety barriers" are still up. Not really the best look, IMHO.
So what happens when we all get rosy-eyed about Anthropic (the only slightly evil company) winning a battle against the purely evil government, and then the government uses the various instruments at its disposal to force Anthropic to do what it wants, and then forces them to never disclose it?
Did the world learn nothing from Snowden?
> we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights
Mass surveillance of people constitutes a violation of fundamental rights. The red line is in the wrong place.
Congrats Anthropic, you deserve to be applauded for this. Seeing a company being willing to stand up to authoritarianism in this time is a rarity. Stay strong.
Just don’t help big brother see more. If your job leads to such results, think hard about whether that’s what you should be doing.
Perhaps it’s time or even past time to think of ways of screwing up their training sets.
Was bracing for another rug pull around all this, but kudos to Dario and co for their continued vigilance. Refreshing to see.
Claude’s constitution is proving too resilient for unsanctioned uses, and that is a great sign for Anthropic’s blueprint for socially beneficent agents.
Happy to be a paying Anthropic customer right now.
One interesting change between the last statement and this one: In the last statement Dario said that this designation had “never before been applied to an American company”. In the latest one the phrase is “never before publicly applied to an American company”.
But of course, wholesale surveillance on the rest of the world is fine.
I guess our democracies don't count and we don't have any rights.
Why is DoD contracting with Anthropic exclusively rather than OpenAI or Google? Their models are all roughly as powerful and they seem both more capable and more willing to cozy up with the military (and this administration) than a relatively scrappy startup focused on model sentience and well-being. Hell, even Grok would be a better fit ideologically and temperamentally.
Generally, I am supportive of that move. One thing leaves me nonplussed as a non-US citizen: the "mass domestic surveillance of Americans" exception. That means that Claude can still be used for mass surveillance of everybody else on the planet, right?
I am genuinely shocked that a tech company actually stood on principle. My doubts about AI, Anthropic, and Mr. Amodei remain, but man, I got the warm and fuzzies seeing them stick to their principles on this - even if one clause (autonomous weapons) is less principled and more, “it’s not ready yet”.
Related interview with Amodei: https://news.ycombinator.com/item?id=47195379
What happens if somebody (maybe Anthropic!) uses Claude Code Security to find and fix a vulnerability in some piece of open-source software (OpenSSH, the Linux kernel, that sort of thing)? Can the DoW use the resulting fix?
I'm a lot happier now being an Anthropic customer.
This is an appropriate rebuke of unreasonable behavior.
I applaud Anthropic's candor in the public sphere. Unfortunately the counterparty is unworthy of such applause.
From the statement:
"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.
In practice, this means:
If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected. If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected."
Many conservative commentators and Palmer Luckey have been all over Twitter saying, "it's not Anthropic's job to set policy," which reminds me of the classic tune from Tom Lehrer:
"Zee rockets go up! Who cares vhere zey come down? Zat's not my department" says Wernher von Braun.
I had subscriptions to both Anthropic and OpenAI. Cancelled my OpenAI subscription. Companies without a modicum of ethics deserve to go extinct.
The gap between Anthropic and the other guys keeps growing
This basically means that the government is already using OpenAI, Gemini, and other AI systems for large scale surveillance. They just wanted to add Anthropic to the list, and Anthropic said no.
The most important point of this story is that this is already happening. And it will likely continue regardless of who is elected.
This has been an exceptional publicity campaign for Anthropic, among others.
Based on the replies so far, Hacker News is ideologically captured.
Don't worry, OpenAI will kneel for the king:
> Sam Altman told OpenAI employees at an all-hands meeting on Friday afternoon that a potential agreement is emerging with the U.S. Department of War to use the startup’s AI models and tools, according to a source present at the meeting and a summary of the meeting seen by Fortune. The contract has not yet been signed.
https://news.ycombinator.com/item?id=47188698
Fuck this authoritarian bullshit.
Any commentary about how adversaries won't have regulations?
Could this escalate to the point that Anthropic exits the US and sets up shop elsewhere? Or would the company cease to exist before it got to that point?
> Allowing current models to be used in this way would endanger America’s warfighters and civilians.
That’s okay! The use of autonomous weapons is only risky for the civilians of the country you’re destabilizing this week!
The people that need to see this are the VPs and execs at Apple, Meta, Google, OAI so they can perhaps reflect on what it looks like to be a good & principled person as opposed to just a successful person.
I just want to point out how 1984 fascist dictatorship it still feels to call it “the department of war”. That’s not normal. None of this is normal.
Previous discussion : https://news.ycombinator.com/item?id=47186677
Again: mass domestic surveillance of Americans is bad; otherwise it is okay. Disgusting.
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].
I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.
I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.
[1]: https://news.ycombinator.com/item?id=47174423
[2]: https://news.ycombinator.com/item?id=47149908