Anthropic wants regulatory capture to advantage itself: it hypes its products' capabilities, then acts surprised when the Pentagon takes those grand claims seriously and threatens government intervention.
This is why people should support open models.
When the AI bubble collapses, these EA cultists will be seen as some of the biggest charlatans of all time.
“Defense of democracy” is just another version of “think of the children”.
Now, I'm curious: how do the Bedrock/Azure Claude models work?
Do these rules apply to them too?
There is no Department of War. This is the dumbest fucking timeline.
Big respect
Total humiliation for Hegseth; surely there will be a backlash.
I don't think this is genuine concern; I think it's instead the veiled fear of the TDS posse, covered by feigned concern.
Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!
The framing here is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, a framing that comes almost entirely from the U.S. side, with the U.S. as the aggressor.
AI should never be used in military contexts. It is an extremely dangerous development.
Look at how US ally Israel used the non-LLM AI systems "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in its genocide of Palestinians.
Keep in mind: the government is very invested logistically in Anthropic.
So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.
Because if there were some kind of concession, it would have been simplest just to work with Anthropic.
Delete ChatGPT and Grok.
Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire into the air merely to scare off the men of the opposing force.
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.
One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.
Working with the DoD/DoW on offensive use cases would put these contracts at risk. Anthropic most likely isn't training independent models on a nation-by-nation basis, so it would be shut out of public and even private procurement outside the US: exporting the model for offensive use cases would be export controlled, yet governments would demand parity in treatment or retaliate.
This is also why countries like China, Japan, France, the UAE, KSA, India, etc. are training their own sovereign foundation models with government funding and backing, allowing them to use those models on their own terms, because it was their governments that built or funded them.
Imagine if the EU had demanded sovereign cloud access from AWS right at the beginning, in 2008-09. That is what most governments are now doing with foundation models, because most policymakers, along with a number of us in the private sector, view foundation models through the same lens as hyperscalers.
Frankly, I don't see any offramp other than the DPA, even if only to make an example of Anthropic for the rest of the industry.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
I'm glad that Anthropic is making a laughing stock of itself in front of the business community; it is healthy for technological progress.
The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.
Wow, I expected them to cave, and they didn't!
I'll be signing up for Claude again; Gemini has been getting kind of crap recently anyway.
The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means its economy is less reliant on a military-industrial complex.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
My man