A moral stand? ... What? Did we read the same statement? It opens right out of the gate with:
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
which I find frankly disgusting.
They are undeniably taking a moral stand. Among other things, the statement names two use cases they refuse to support. That is a moral stand. It might not align with your morals, but it's still a moral stand.
I feel like the deepest technical definition of autocratic is “fully autonomous weapons”?
Freedom isn’t free. Someone has to defend the democratic values that you and I take for granted.
Dario’s statement is in support of the institution, not the current administration.