Hacker News

We Will Not Be Divided

2535 points by BloondAndDoom · yesterday at 12:54 AM · 801 comments

Comments

mellosouls · yesterday at 9:37 AM

"Domestic".

Very disappointing that the letter's signatories have chosen to reinforce the US-centric idea that using the models to spy on other democracies is fine and dandy.

Altman's and other senior names are notable by their absence; not unexpected given the apparent submission to the DoW that quickly followed, which leaves the signatories here (while well-intentioned) in exposed ethical positions now.

nailer · yesterday at 4:35 PM

All that will happen as a result of US companies being unwilling to work on weapons is that the US will be made more vulnerable to adversaries, particularly the CCP, who don't care about these things.

shevy-java · yesterday at 11:32 AM

"We are the employees of Google and OpenAI, two of the top AI companies in the world."

Well, good luck to them, but the state can control from top-down via laws, so they WILL eventually abuse people and violate their rights by proxy-force. I would not trust any of them with my data.

pluc · yesterday at 12:44 PM

They have now deleted/hidden all the signatures because their corpodaddy went the other way.

This is so great.

gurumeditations · yesterday at 3:57 PM

The “Department of War” DOES NOT EXIST.

nailer · yesterday at 2:48 PM

From the HN Guidelines:

> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

renewiltord · yesterday at 2:41 AM

Well, it looks like OpenAI will be working with the Pentagon: https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...

My personal guess is that Sam Altman said he'd let policy violations go without a complaint and Dario Amodei said he wouldn't.

drcongo · yesterday at 1:41 PM

If I was Anthropic, I'd be saving this as a list of potential hires who share the company's values and shortlisting some to call up on Monday morning.

surume · yesterday at 1:25 PM

You must follow the law in your home country. Your refusal to do so constitutes Treason. Obey the law.

kittikitti · yesterday at 12:21 PM

I respect this and everyone who signed it. Not that I was ever employed by them, but I also wouldn't be confident enough to do this, and I wish it were any other way. This is inspiring, thank you.

bufio · yesterday at 7:11 PM

Hacker news?

asmor · yesterday at 8:28 AM

This is the line? Really?

Not all the other shit this administration has been doing?

God, I hate it here.

yoyohello13 · yesterday at 1:14 AM

I hope Anthropic will survive this. If they don't, it will just be perfect proof that you cannot be both moral and successful in the US.

paganel · yesterday at 8:40 AM

Jeff Dean could have done a lot of good and added his name to the list of signatories, seeing as he's head of AI at Google or some such. He was supposed to be this super-smart dude; I guess he's far from that.

Huge props to the Google and OpenAI engineers who did sign this, those who realized that they're fighting for a greater thing, not just for an extra zero or two added at the end of their bank accounts. Especially as they're taking a great amount of risk by doing it; first of all, imo, they are risking their current employment status.

dluan · yesterday at 3:53 AM

oops turns out you will all be divided

singlewind · yesterday at 11:05 AM

The beauty of balance is that someone can say yes and someone can say no. No matter how well you calculate, there is a theory behind it.

ReptileMan · yesterday at 5:49 AM

It is really nice to see employees creating lists for the next round of layoffs themselves.

paradoxyl · yesterday at 6:52 AM

More Far Left treason, documented.

blaze998 · yesterday at 2:19 AM

December 14, 2024

>After famed investor Marc Andreessen met with government officials about the future of tech last May, he was “very scared” and described the meetings as “absolutely horrifying.” These meetings played a key role on why he endorsed Trump, he told journalist Bari Weiss this week on her podcast.

>What scared him most was what some said about the government’s role in AI, and what he described as a young staff who were “radicalized” and “out for blood” and whose policy ideas would be “damaging” to his and Silicon Valley’s interests.

>He walked away believing they endorsed having the government control AI to the point of being market makers, allowing only a couple of companies who cooperated with the government to thrive. He felt they discouraged his investments in AI. “They actually said flat out to us, ‘don't do AI startups like, don't fund AI startups,’” he said.

...

keep making petitions, watch the whole thing burn to the ground when Trump decides to channel the Biden ideas in this field.

csneeky · yesterday at 4:38 AM

Claude is much better than GPT atm. You really think the government is going to hamstring the engineering of weapons and intelligence capabilities by not using it?

chkaloon · yesterday at 5:09 AM

Too late

lazzlazzlazz · yesterday at 4:48 AM

The signatories of this letter are leaping at a misguided opportunity for moral credit, when the reality is that they're getting whipped into a left-leaning frenzy.

As Undersecretary Jeremy Lewin clarified today[1], these weighty decisions should not be made by activists inside companies, but made by laws and legitimate government.

[1]: https://x.com/UnderSecretaryF/status/2027594072811098230

lovich · yesterday at 1:14 AM

You’re kinda already conceding to some of your opponents points when you use legally invalid names like “Department of War”

I appreciate the sentiment but don’t preconcede to your opposition by using their framing.

jurschreuder · yesterday at 8:18 AM

They always wanted it to be Grok anyway; Grok is the only, what they call, "not woke AI".

amelius · yesterday at 10:24 AM

Hegseth is discovering the shittiness of the SaaS model.

uwagar · yesterday at 11:57 AM

isn't the pentagon just asking for total access to the source code and data silos of anthropic and openai... access that we can't ask for because it's proprietary software?

Samarrrtthh · yesterday at 10:27 AM

why

senderista · yesterday at 4:42 AM

"We hope our leaders will put aside their differences and stand together"

nullbyte · yesterday at 1:10 AM

"He will not divide us!"

alsufinow · yesterday at 8:56 AM

W

moogly · yesterday at 2:31 AM

We have international laws and rules of war. We have weapons treaties (well, some of them are expiring). Sure, not everyone is a signatory, or even follows the conventions they have ratified, but at least having these things in place makes it even remotely possible to categorize and document violations and start processes against rulebreakers and antihumanist actions.

So I looked into what they cooked up in 2023, plus which countries signed it (scroll down for a link to the actual text). It's an extraordinarily pathetic text. Insulting, even.

https://www.state.gov/bureau-of-arms-control-deterrence-and-...

HardCodedBias · yesterday at 7:07 AM

So much insanity.

Anthropic wanted a veto on the use of force by the USG. That is intolerable; no private party can have a veto over the sovereign. It is that simple.

Anthropic should have just walked away (and taken the settlement lumps) when they realized that the USG knew. But no, they started some crazy campaign.

This is so irrational of Anthropic. Purchasing managers across the US (and the world) have to understand now that while Anthropic has the best model on the planet, it is not rational (if you prefer, it is not rational in ways commonly understood). It is a risk and must be treated as such.

HWNDUS7 · yesterday at 6:08 AM

Sweet. Looking forward to another CTF season of He Will Not Divide Us.

I love performative acts of wealthy Silicon Valley drags.

ineedaj0b · yesterday at 9:04 AM

really dumb. you don’t win this

rybosworld · yesterday at 3:05 PM

I don't love talking politics on this site. Hacker News has done a pretty decent job of staying non-political, and I think that's been a positive thing.

AI is re-shaping American society in a lot of ways. And this is happening at a time where the U.S. is more politically divided than it's ever been. People who use LLMs regularly (most SWEs at this point) can understand the danger signs. The bad outcomes are not inevitable. But the conversations around this cannot only be held in internet forums and blogposts.

Hackernews is an echo chamber of early adopters of tech. The discussions had here don't percolate to the general population.

I believe many of us have a duty to make this feel real to the less technical people in our lives. Too many folks have an information filter that is one of Fox News/CNN/MSNBC. Fox is the worst on misinformation. The others are also bad. Their viewers will not hear, in any clear way, how the Trump admin is trying to bully AI companies into doing what it wants. This will be a headline or an article. A footnote not given the attention it deserves.

Plainly: there is an attempt to turn AI into a political weapon aimed at the general population. Misinformation and surveillance are already out of control. If you can, imagine that getting worse.

This feels like one of those hinge moments. If you can, have real-life conversations with people around you. Explain what's at stake and why it matters now, not later.

verdverm · yesterday at 1:51 AM

Use the feedback forms within their platforms to let the companies know your thoughts

fzeroracer · yesterday at 1:43 AM

It's rather amusing that this is the proverbial 'red line', not y'know, everything else this administration has been tearing up and running roughshod over. Maybe this would've been less of an issue if companies were more proactive about this bullshit in the first place?

That's why it's hard for me to feel bad about companies suddenly finding themselves on the receiving end. They dug their grave inch by inch and are suddenly surprised when they get shoved into it.

imiric · yesterday at 7:37 AM

The levels of irony in this case are staggering.

The employees of these companies are complicit in creating the greatest data harvesting and manipulation machine ever built, whose use cases have yet to be fully realized, yet when the government wants to use it for what governments do best—which was reasonable to expect given the corporate-government symbiosis we've been living in for decades—then it's a step too far?

Give me a fucking break. Stop the performative outrage, and go enjoy the fruits of your labor like the rest of the elites you're destroying the world with.

alfiedotwtf · yesterday at 3:16 AM

It would be funny if, in the end, the only one left that didn't say no to Trump were Alibaba.

krautburglar · yesterday at 2:54 AM

You have 1) stolen everybody's shit and put it behind a paywall, 2) cornered the hardware market in some RICO-worthy offensive that has priced one of the few affordable pastimes for young people out of reach, 3) changed your climate story (lie) on a dime and started putting the horrible power-guzzling data centers on any strip of land within spitting distance of a power plant. I hope you all go out of business, and I hope it happens French Revolution style.

Of course they were going to use it for military purposes you spiritual abortions, and there is nothing your keyboard-soft hands can do about it.

duped · yesterday at 2:48 AM

The Department of War doesn't exist, don't meet the fascists on their own terms at any level. They don't debate or operate in good faith.

jackblemming · yesterday at 2:10 AM

So big tech wants to court Trump with millions in donations, and now that the big bully they supported is bullying them... we're supposed to feel some kind of sympathy? Am I missing something here? Why did Anthropic get involved with the military in the first place?

verisimi · yesterday at 5:03 AM

It's great that people are taking a moral position re their work, and are seemingly prepared to take a bit of a risk in expressing themselves.

However, if we're honest, Google has a long history of selling 'the people' out on domestic surveillance. There is even a good argument that this is what it was created for in the first place, given it was seeded with money from In-Q-Tel, the CIA venture capital fund. So, while I commend acting with your conscience in this (rather minor) case, and I'm glad to see people attempt to draw a line somewhere, what will this really come to? I strongly suspect this event itself is just theater for the masses, where corporates and their employees get to stand up to government (yay!). The reality is probably that all that is being complained about, and far worse, has been going on for years.

How far would these signatories go? Would they be prepared to walk away from all that money? Will they stop the rest of the dystopian coding/legislation writing, or is that stuff still ok (not that evil)?

Ultimately, is gaining the money worth the loss of one's soul? If you know better, and know that it is wrong to assist corporations and governments in cleaving people open for profit and control, but do it anyway for the house, private schools, holidays, Ferrari, only taking a stand in these performative, semi-sanctioned events - is this really the standard you accept for yourself? If so, then no problem. If not, what exactly are you doing the rest of the time? Are you able to switch your morality/heart/soul off? Judge yourself. If you find you are not acting in accord with yourself, everything is already lost.

nilespotter · yesterday at 4:42 AM

These models are weapons, whether the frontier providers' founders and their trite and lofty mission statements like it or not.

Private individuals and private companies do not get to create a defensive weapon with unprecedented power in a new category in the US and not share it with the US military.

You guys are batshit insane.

remarkEon · yesterday at 2:22 AM

This whole episode is very bizarre.

Anthropic appears to be positioning themselves as the "ethical AI" in the mindspace of, well, anyone paying attention. But I am still trying to figure out where exactly Hegseth, or anyone in DoW, asked Anthropic to conduct illegal domestic spying or launch a system that removes HITL from kill chains. Is this all just some big hypothetical that we're all debating (hallucinating)? This[1] appears to be the memo that may (or may not) have caused Hegseth and Dario to go at each other so hard, presumably over this paragraph:

>Clarifying "Responsible AI" at the DoW - Out with Utopian Idealism, In with Hard-Nosed Realism. Diversity, Equity, and Inclusion and social ideology have no place in the DoW, so we must not employ AI models which incorporate ideological "tuning" that interferes with their ability to provide objectively truthful responses to user prompts. The Department must also utilize models free from usage policy constraints that may limit lawful military applications. Therefore, I direct the CDAO to establish benchmarks for model objectivity as a primary procurement criterion within 90 days, and I direct the Under Secretary of War for Acquisition and Sustainment to incorporate standard "any lawful use" language into any DoW contract through which AI services are procured within 180 days. I also direct the CDAO to ensure all existing AI policy guidance at the Department aligns with the directives laid out in this memorandum.

So, the "any lawful use" language makes me think that Dario et al have a basket of uses in their minds that they feel should be illegal, but are not currently, and they want to condition further participation in this defense program on not being required to engage in such activity that they deem ought be illegal.

It is no surprise that the government is reacting poorly to this. Without commenting on the ethics of AI-enabled surveillance or non-HITL kill chains, which are fraught, I understand why a department of government charged with making war is uninterested in debating this as terms of the contract itself. Perhaps the best place for that is Congress (good luck), but to remind: the adversary these people are all thinking about here is the PRC, who does not give a single shit about anyone's feelings on whether it's ethical or not to allow a drone system to drop ordnance on its own.

[1] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ART...

sensanaty · yesterday at 10:54 AM

I'm going to copy a comment I made in a related thread:

I might be being a bit conspiratorial, but is anyone else not buying this whole song and dance, from either side? Anthropic keeps talking about their safeguards or whatever, but given their marketing tactics historically, it just reads more like posturing to get good PR for "fighting the system" or whatever.

"Our AI is so advanced and dangerous Trump has to beg us to remove our safeguards, and we valiantly said no! Oh but we were already spying on people and letting them use our AIs in weapons as long as a human was there to tick a checkbox. Also, once our models improve enough then we'll be sending in The Borg to autonomously target our Enemies™"

I just don't buy anything spewing out of the mouths of these sociopathic billionaires, and I trust the current ponzi schemers in the US gov't even less.

Especially given how much astroturfing Anthropic loves doing, and the countless comments in this thread saying things like "Way to go Amodei, I'm subbing to your 200 dollar a month plan now forever!!11".

One thing I know for sure is that these AI degenerates have made me a lot more paranoid of anything I read online.

nobodywillobsrv · yesterday at 6:25 AM

It really feels like I am no longer impressed with Anthropic safety.

Do they have even a basic understanding of the different regimes of safety and what allegiance to one's own state means?

It would be fine to say they are opting out of all forms of protection against adversaries.

But it feels like just more insane naive tech bro stuff.

As someone outside the tech bro bubble in fintech in London, can somebody explain this in a way that doesn't indicate these are sort of kids in a playground who think there is no such thing as the wolf?

Again, opting out and specializing in tech that you are going to provide to your enemies AND friends alike is fine. That is a good specialization. But this is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.

politician · yesterday at 4:21 AM

I simply do not understand why American tech companies and their employees raise hue and cry about supporting the military. For those of you who support their position, have you ever stopped to consider that your safe, comfortable lives of free speech and protests and TikTok and food and gas and Amazon next-day deliveries are enabled by a massive nuclear deterrent operated by the very military you oppose?

It is just so disappointing to come here and read these naive takes. Yes, Anthropic should be compelled to support the military using the DPA if necessary.

hakrgrl · yesterday at 3:36 AM

1.5 hours after this was posted, Sam Altman stated OpenAI will work with the DoW.

So much for this waste of a domain name. https://x.com/sama/status/2027578652477821175

"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. "

