The bravery of the people signing this anonymously is inspiring.
The regulatory environment in the US is insane
What's crazy here is that a government is requiring de-regulation while companies are trying to keep stricter rules. What a time.
I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI
The important thing to know is that no one wants a conflict. Don't be used for that. Don't accept that.
We protect our families when we are home. That's all everybody wants.
Shades of "He Will Not Divide Us"
>We are the employees of Google and OpenAI, two of the top AI companies in the world.
Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.
Ted Kaczynski was right about technology
Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?
We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.
So these are the employees that ignore the hundreds of other atrocities their companies commit against other countries, small firms, and individuals; come out flags waving for a few cherry-picked issues; and the next day go back to their well-paid jobs, vested stock, office perks, and lunch chefs to passively support these agendas further, even though they have the best career mobility across almost all industries.
I mean it's neat, but naive at best.
Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it?
Hey did someone show this to Sam? I don't think he knows.
This is game theory 100%, who's gonna be the bad guy?
> domestic mass surveillance and autonomously killing people without human oversight
spoiler alert: this is already happening
do labs in China have a choice in the matter?
No problem! The DoD^HW will just use DeepSeek!
(I wish this were a joke)
They should be collecting signatures from employees at xAI. I think they're the most likely to fill the space left by Anthropic.
We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful/dangerous systems. If you work for a defense contractor or under ITAR then you already know this.
The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.
Well, I think I will get the 200 sub.
Good luck with that. I just don't see either Google or OpenAI listening to their employees on this. They might have their own reasons for not wanting to help build Skynet, but if they don't, I'm sure those employees can readily be replaced with somebody more compliant.
I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?
Apparently, OpenAI already folded.
https://www.cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-...
A unified front from tech companies could have stood a chance, but there's too much money to be made and the imbalance of power is too great without departing the area of influence of the US government entirely (and then go where? China, UK, Australia, etc. are equally not shy of coercing commercial capabilities in pursuit of government goals, including military goals).
> Label the company a "supply chain risk"
Are they not a huge supply chain risk? Anthropic, being second chicken to OpenAI for a long time, decided to integrate tightly with the DoW. Now that their consumer products are doing better they're making decisions for the DoW as a supplier. This isn't about whether I agree with the DoW or not, it's just that behavior obviously would never fly with any customer.
The only real surprise is I haven't heard of the DoW considering Grok, which is not only a frontier model but has an existing gov cloud platform.
> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands...
WTF does that even mean, we "hope"???!? You know they won't, what's the point of hoping? Why not quit if you have the courage, or not quit -- and shut up?
It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...
Well that aged poorly.
Kneecapping the country's best AI lab seems like a bad way to win at the cyber.
> Signed,
The people who:
> steal any bit of code you put on the internet regardless of the license you use or its terms, then use it to train their models, then turn around and try to sell it to you
> made it so you can't afford new, more powerful computers or smartphones anymore, or perhaps even just replacements for the ones you already have, thanks to massive GPU, DRAM, SSD, and now even HDD shortages
> flood the internet with artificial, superficial content
> aggressively DDoS your website
Real pillars of society.
At least they're making it easy for HR.
The counterargument by the other side will always be, if we don't do it it doesn't matter because the Chinese will do it anyway - and then, common people will be at a disadvantage.
Allowing anonymous signatories only weakens the petition. Two important people signing a petition is worth more than 10000 anons.
I scrolled through a few pages and 40-60% are anonymous. Even a handful weakens the petition.
I wish more people would participate in civics. Attend your city council or local political party meeting. See what it takes to actually collect signatures, run a campaign.
Online slacktivism actually just worsens the cause, because potential energy is vented on futile online “petitions” rather than taking real action.
How is posting on this website with your full name not career suicide?
And people were wondering how OpenAI will find profitability.
So now they suddenly develop a conscience? Killing education (and by implication actively dumbing down the future world), putting large parts of the labor market at risk, porn fakes, and destroying artistic creation are all acceptable in the name of profit, apparently.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
[90 minutes later]
Ah! Well, nevertheless
OK, this is a cheap shot on my part. But still: we hope? What kind of milquetoast martyrdom is this? Nobody gives a shit about tech workers as living, breathing, human moral agents. You (a putative moral actor signed onto this worthy undertaking) might be a person of deep feeling and high principle, and I sincerely admire you for that. But to the world at large, you're an effete button pusher who gets paid mid-six figures to automate society in accordance with billionaires' preferences and your expressions of social piety are about as meaningful as changing the flowers in the window box high up on the side of an ivory tower. The fact that ~80% of the signatories are anonymous only reinforces this perception.
If you want this to be more than a futile gesture followed by structural regret while you actively or passively contribute to whatever technologically-accelerated Bad Things come to pass in the near and medium term, a large proportion of you (> 500/648 current signatories) need to follow through and resign over the weekend. Doing so likely won't have that much direct impact, but it will slow things down a little (for the corporate and governmental bad actors who will find deployment of the new tech a little bit harder) and accelerate opposition a little (market price adjustments of elevated risk, increased debate and public rejection of the militaristic use of AI).
Hope, like other noble feelings, doesn't change anything. Actions, however poorly coordinated and incoherent, change things a little. If your principles are to have meaning, act on them during the short window of attention you have available.
Not using Claude only weakens the state. Just don’t oblige
Hegseth shared a Trump tweet a few hours ago saying they're going to quit doing business with Anthropic.
Am I the only one who is really freaking out?
They deploy BOTS to KILL PEOPLE!
This is the only big news here.
This is the only time in this timeline where we must say "you shall not pass". The ultimate red line. And there is no going back. It's just escalation in an arms race from now on. Nothing good can come out of this.
And you are talking about details, if some guys mentioned the word "domestic" in their tweet etc.
BOTS will autonomously KILL PEOPLE!
I'm regularly surprised how otherwise intelligent people with "good intentions" keep going to work at these places in the first place, then get all "surprised pikachu" when it turns out their work might go towards nefarious ends. These technologies are inherently anti-creativity and researchers have been sounding the alarms about their efficacy for mass surveillance for a long time. Even this petition only seems concerned with "domestic mass surveillance", as if the tools used by an empire abroad don't inevitably get turned inwards.
At some point it's hard not to think they just can't avoid the money. At least for the SWEs, these are folks who could work at much less "evil" businesses and still easily clear $150k or $200k, but they just can't help themselves. This is a company that steals its training data and whose primary product is at best an anti-working-class cudgel that management can use to intimidate workers and threaten them with replacement, and at worst is a mass-surveillance/killing tool.
All that will happen as a result of US companies not willing to work on weapons is that the US will be made more vulnerable to adversaries, particularly the CCP who don’t care about these things.
No surprise to have not heard anything from xAI
> permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
This sounds way worse than dystopian, Orwellian or big-brotherly, in a world where you can't even get a human to review the 'autonomously placed lock' on your email or social media account. The Terminator saga is perhaps a good fit. But I have a feeling that they won't stop even at that.
The “Department of War” DOES NOT EXIST.
OpenAI is nothing without its people
These two exceptions shouldn't have to be disputed.
At this point I'd go so far as to say I wouldn't trust my AI history with any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.
Your AI will know more about you than any other company does; I'm not going to trust that to anyone who trades ethics for profits.