Hacker News

Statement from Dario Amodei on Our Discussions with the Department of War

895 points by qwertox yesterday at 10:42 PM | 501 comments

Comments

jjcm yesterday at 11:55 PM

This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong-arm attempt by the government to allow any use. I really like Anthropic's approach here, which is to state in turn that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

lebovic today at 12:21 AM

I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by their values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving its goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that a) is against their values, and b) they think is a net negative in the long term. (Many others, too; they're just well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

qaid yesterday at 11:20 PM

I was reading halfway through when one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And it neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance.

A real shame. I thought "Anthropic" was about being concerned about humans, not "my people" vs. "your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War.

tabbott yesterday at 11:43 PM

An organization's character really shows through when its values conflict with its self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

flumpcakes yesterday at 11:04 PM

This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent, non-grievance-based leadership.

helaoban yesterday at 11:34 PM

All of these problems are downstream of Congress having thoroughly abdicated its powers to the executive.

The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

zb1plus today at 2:00 AM

It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.

nkoren yesterday at 11:18 PM

This makes me a very happy Claude Max subscriber.

Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

atleastoptimal yesterday at 11:56 PM

I was concerned originally when I heard that Anthropic, who often professed to be the "good guy" AI company that would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

alangibson yesterday at 10:54 PM

It's not actually named the Department of War, because Congress didn't rename it.

Other than that, good on ya.

freakynit today at 1:59 AM

Welp, I never thought the show "Person of Interest" would come to life anytime soon, but here we are. In case you haven't watched it, it's time to give it a go. Bear with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must, though.

gdiamos today at 2:13 AM

This is why I like Dario as a CEO - he has a system of ethics that is not just about who writes the largest check.

You may not agree with it, but I appreciate that it exists.

danbrooks yesterday at 11:01 PM

Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.

ApolloFortyNine yesterday at 11:11 PM

Idk if the reporting was just biased before, but from what I saw this time last week, it was thought you couldn't use Anthropic's models to bring about harm at all, and now they're making it clear that they only object to domestic use and fully autonomous use.

Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied that "do no harm" was pretty much one of the rules.

kace91 yesterday at 11:17 PM

As someone who is potentially their client and not domestic, it's really reassuring that they have no concerns with mass spying on peaceful citizens of my particular corner of the world.

Metacelsus yesterday at 11:02 PM

I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.

adamgoodapp today at 2:14 AM

It's OK to mass-surveil foreign entities, then.

asmor yesterday at 11:07 PM

As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?

oxqbldpxo today at 12:50 AM

It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.

mooglevich today at 1:57 AM

"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.

ramoz yesterday at 11:36 PM

All completely rational. It makes the US military look fairly incompetent here… as a veteran, it's embarrassing.

zmmmmmtoday at 1:20 AM

I can't help but highlight the problem that is created by the renaming of the Department of Defense to the Department of War:

> importance of using AI to defend the United States

> Anthropic has therefore worked proactively to deploy our models to the Department of War

So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as a purely offensive capability with no defensive element.

Declining to engage with a Department of War doesn't mean you aren't supporting the defense of the US. That should be the end of the discussion here.

kumarvvr today at 1:12 AM

All this is for nought.

The power lies with the US Govt.

And it's corrupt, immoral, and unethical, run by power-hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.

Ultimately, Anthropic will fold.

All this is to show to their investors that they tried everything they could.

DaedalusII today at 1:44 AM

They made it easy to generate PowerPoint presentations; that is the real reason the DoW is using them.

This is a very chauvinistic approach... why couldn't another model replace Anthropic here? I sense it's because gov people like using the Excel plugin and the font has a nice feel. A few more weeks of this and xAI will be the new gov AI tool.

sbinnee today at 1:12 AM

As a non-US citizen, this article sounds mildly concerning to me. My country is an ally of the US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from the US.

Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time, I felt like he sounded more like a politician than an entrepreneur.

I know Anthropic is more mission-driven than, say, OpenAI. And I respect their constitutional way of training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and their missions gives me chills.

altpaddle yesterday at 11:25 PM

Props to Dario and Anthropic for holding firm on these two points, which I feel should be no-brainers.

siliconc0w today at 2:09 AM

Good on them for standing up to this administration. I doubt the administration actually wants to put Claude in the kill chain, but this fight gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to justify the switching costs of moving to xAI - giving Elon more reason to line Republican campaign coffers.

I'm guessing this is because Anthropic partners with Google Cloud, which has the necessary controls for military workloads, while xAI runs in hastily constructed datacenters mounted on trucks or whatever to skirt environmental laws.

ra yesterday at 11:07 PM

> "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?

2001zhaozhao today at 1:28 AM

Congratulations, you just got a new $200 Claude Max plan customer.

mrcwinn today at 2:20 AM

Keep in mind: the government is very invested logistically in Anthropic.

So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.

Because if there were some kind of concession, it would have been simplest just to work with Anthropic.

Delete ChatGPT and Grok.

dakolli today at 2:14 AM

This is a PR play by Anthropic, likely in coordination with the administration. They don't care; they just need the public to view them as a victim here, and then it's business as usual.

I'd prefer they get shut down. LLMs are the worst thing to happen to society since the invention of the nuclear bomb. People all around me are losing their ability to think, write, and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.

Remember, the person who showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator, lol.

geophile today at 12:55 AM

I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.

noupdates today at 1:04 AM

Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.

SamDc73 today at 1:37 AM

Didn't Dario Amodei ask for more government intervention regarding AI?

mvkel yesterday at 11:11 PM

Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be compelled to comply by any government, because they don't have the keys.

muglug today at 12:10 AM

OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.

dylan604 yesterday at 11:03 PM

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

That opening line is one hell of a setup. The current administration is doing everything it can to become autocratic, thereby setting itself up as an adversary to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to see such a succinct opening instead of just slop.

mrcwinn today at 2:09 AM

I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.

alach11 yesterday at 11:15 PM

A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.

What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?

joshAg today at 1:43 AM

Torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at the nearest nexus.

dzonga today at 1:17 AM

These guys are selling snake oil to the government, because they know they can get cash based on fear.

The Chinese are releasing equivalent models for free or super cheap.

AI costs and energy costs keep going up for American AI companies,

while China benefits from lower costs.

So yeah, you have to spread F.U.D. to survive.

protocolture yesterday at 11:39 PM

Classic seppo diatribe.

"We will build tools to hurt other people but become all flustered when they are used locally"

jwpapi today at 1:52 AM

Am I the only one who understands the department's position? Like, if another country will have it without safeguards, why would I not want it without safeguards? I can still be the safeguard, but having safeguards enforced by another entity that potentially has to face negative financial consequences seems like a disadvantage; it would be weird to accept that as the Department of War.

I understand the risk, but that is the pill.

anduril22 today at 12:18 AM

Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.

maxdo today at 12:09 AM

Ukraine, Russia, and China actively develop AI systems that kill. A US-based company not developing such systems will not change the course of events.

michaellee8 yesterday at 11:04 PM

Probably not a good idea to let Claude vibe-select targets; it still sometimes hallucinates.

huslage today at 12:41 AM

It is not the Department of War. He's toeing the line from the get-go. Forget this guy.
