
The Pentagon Threatens Anthropic

129 points by lukeplato | today at 5:49 PM | 92 comments

Comments

emsign today at 6:39 PM

So the Pentagon is strongarming a company into cooperation? That reminds me of how my alcoholic neighbor used to treat his family. It's almost as if someone let a mean drunk be in charge of the Pentagon.

unyttigfjelltol today at 7:01 PM

Techno futurist:

1. Builds tool extremely capable of mass surveillance and running autonomous warfighting capabilities.

2. Expresses shock — shock — when the Department of War insists on using the tool for mass surveillance and autonomous warfighting systems.

xiphias2 today at 7:33 PM

"Needless to say, I support Anthropic here. I'm a sensible moderate on the killbot issue (we'll probably get them eventually, and I doubt they'll make things much worse compared to AI "only" having unfettered access to every Internet-enabled computer in the world). But AI-enabled mass surveillance of US citizens seems like the sort of thing we should at least have a chance to think over, rather than demanding it from the get-go."

Why would killbots be a sensible moderate position with the number of hallucinations LLMs have right now?

They just need one rm -rf bug somewhere to do something disastrous, and at least Anthropic's CEO understands the limitations of the software.

bink today at 6:48 PM

Imagine a world where in order to do business in the US you must grant the government control of your company. This sounds worse than even the most alarmist China takes.

7777777phil today at 6:41 PM

Using the "supply chain risk" designation against a domestic AI company is wild. I'm not sure that tool was designed with vendors who won't rewrite their ToS on demand in mind.

Meanwhile the Pentagon could just build its own capacity. Commercial AI outspends federal science R&D 75:1 right now.

vonneumannstan today at 6:35 PM

Point blank one of the most nakedly evil things the government has ever tried to do. Apparently Anthropic's sticking points were no using the model for autonomous kill orders and no mass surveillance...

ks2048 today at 7:43 PM

Big Tech: you can just do things.

Corrupt, evil Government: OK.

csours today at 7:27 PM

This pairs nicely with the finding of the Supreme Court:

    Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity from criminal prosecution for actions within his conclusive and preclusive constitutional authority. And he is entitled to at least presumptive immunity from prosecution for all his official acts. There is no immunity for unofficial acts.
https://www.supremecourt.gov/opinions/23pdf/23-939_e2pg.pdf
mayhemducks today at 7:25 PM

I'm really not understanding this. Doesn't the typical path for advanced technology making it into the hands of civilians start with military applications and end with it being modified for civilian use?

If the Pentagon wants Anthropic's technology because it has desirable characteristics, can it not just train its own AI models? Why can't the Pentagon build data centers full of GPUs and hire some smart people like the commercial AI providers did?

Why in this case, has the usual path for technology been flipped? Starting out as commercial tech for civilians, and then being re-purposed for military use feels unusual to me. Maybe Hegseth's "War department" has a recruiting problem.

dqv today at 6:55 PM

> anyone know what news it was reacting to?

Probably this https://time.com/7380854/exclusive-anthropic-drops-flagship-...

godelski today at 8:11 PM

There's a lot of talk about "Future Claude", even Karpathy has mentioned something similar. But does anyone stop to think about how utterly dystopian this is?

We are creating a worse version of the Panopticon [0] than was originally designed. A Panopticon that could have entirely devastating consequences. Not only is "the guard" able to see what any given "prisoner" is doing at any time, but they can look into the past. The self-regulation happens because the prisoners could be being watched. It is Orwellian. But this thing we're building? It can look at the prisoners' actions before it was even completed.

I think people don't think about this enough. Culture changes and in that time what is considered morally justifiable or even reasonable changes. Sometimes it is easy to judge people in the past by our current standards but other times it is not. Other times there is context needed, which is lost not only by time but in what is never recorded. How do prisoners self-regulate to future values that they do not know they are supposed to align to?

This creates a terrible machine where whoever controls it will likely have the power to prosecute anyone arbitrarily. Get the morals to change just slightly, or just take things out of context, and you have the public demanding prosecution. I think people think this seems far fetched, but I'm willing to bet every single person on HN has fallen for some disinformation campaign. Be it "carrots help you see in the dark", people's misunderstanding between paper/plastic/canvas tote bags, a wide variety of topics related to environmentalism, and on and on. Even if you believe you have never fallen for such a disinformation (or malinformation) campaign, you'll have to concede that it is common for others to. That's all that is needed for someone in power to execute on this Panopticon, and it is a strategy people with power have been refining for thousands of years.

I really do support Anthropic pushing back here, but the discussions about "Future Claude" really are unsettling. It is like we are treating this as an inevitability. As if we have no choice in the matter. If that is true, then we are the mindless automata, and then what does the military need killer-bots for? They would already have them.

[0] https://en.wikipedia.org/wiki/Panopticon

kittikitti today at 7:41 PM

This is going to be a controversial take but I don't agree with Anthropic on this one. My gut instinct says that the Pentagon should back down, but my gut is wrong because of political bias. I can't claim to be serious about AI governance if Anthropic is able to sidestep the interests of the Pentagon, whoever might be in charge. Anthropic is not stronger than the US government, and it would set a dangerous precedent if they don't comply.

At the end of the rabbit hole, it's all about enforcement, regardless of the contract. Who's going to enforce Anthropic's terms and conditions if they betray the Pentagon?

fogzen today at 7:24 PM

I can't help but compare what happened with nuclear physics to what will happen with ASI/AGI. We could have used nuclear energy to provide abundant, clean energy. Instead we used it for warfare to kill people. All of the brightest minds and frontier technology were directed towards killing people.

We could use AI for medical advances and to create a communist utopia without serfdom. But it's already looking like we're getting killer robots and more oppression.

Hope I'm thinking about this wrong. I fear very soon the government will begin nationalizing AI resources and forcing AI researchers to direct their efforts towards weapons systems. Similar to what happened in physics. "We have to be first to have autonomous robot armies" basically.

Jamesbeam today at 7:20 PM

Might be a long stretch, but every analyst I’ve heard talking about this is concerned about mass surveillance of US citizens again, and the Wyden letter is hinting at illegal activities by the CIA.

https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_d...

Plus, the US military publicly acknowledged using Anthropic's products in some form during the Venezuela operation, and Hegseth seems willing to put the boot on Anthropic's neck, judging by the options presented to them. That is a lot of interesting things happening in a very short amount of time in an environment that usually works as frictionlessly as possible.

Even for Hegseth, this is a lot of public eyes on something the Pentagon of previous administrations would probably have handled with the same willingness to drown Anthropic in its own tears, but completely out of public sight.

But the Pentagon works in mysterious ways, and therefore there might be a very good reason for this kind of pressure, that the people who are responsible for national security even risk making a public fuss about it, that we peasants simply don’t see.

I also can’t wait to see how the US military messes this whole AI superiority softporn up. It’s not a matter of if but only of when.

They have a track record of mishandling weapons of mass destruction.

https://www.atomicarchive.com/almanac/broken-arrows/index.ht...

To be fair though, for the number of nuclear weapons they are handling overall, they are doing a pretty good job. But no more open blast doors for the pizza delivery guy, ok?

The real question is how many broken arrow events we can even have with AI. Is it "better luck next time, baby Skynet" serious, or "we fucked up, sir, everyone is going to die" matchsticks bad, if whatever system they use decides every problem they throw at it can be solved by removing the human from the equation, all of them preferably?

bediger4000 today at 6:47 PM

How does Hegseth believe he's going to outmaneuver the company with the best "AI" on earth? Anthropic will run circles around him.

IG_Semmelweis today at 6:52 PM

I understand that Anthropic has one of the most popular products in the market.

But no one, especially the government, should get in bed with them when Anthropic's leadership has a track record of trying to use their early mover advantage to effectively create an AI cartel [1].

I'm glad Anthropic is getting a taste of their own medicine.

[1] https://www.bloomberg.com/opinion/articles/2025-10-15/anthro...

oceanplexian today at 6:57 PM

Anthropic cutting off the Pentagon is saying in no uncertain terms that they support allowing the PRC access to frontier military technology but not the US.
