This frames it as Pentagon vs. Anthropic, but the actual problem is upstream. If we tell companies they must prevent all possible harm, we set them up with a no-win choice: nerf the model and silently lose value nobody can quantify, or leave it capable and get blamed for every bad outcome. Nobody wants nerfed models either, and that's essentially what the DoW is saying.
This isn't an external directive; Anthropic was founded with the mission of creating safe, reliable AI systems. You wouldn't see the same people working there if the company didn't stand by its acceptable use policy and other internal standards.