The Pentagon seems to see this as a procurement issue: we bought a tool, so don't tell us how to use it. Anthropic, by contrast, seems concerned that a tool's nature is shaped by the constraints placed on it: we don't really understand this AI thing yet, and an unconstrained version could be a worse and more dangerous tool.