I was originally concerned when I heard that Anthropic, which has often professed to be the "good guy" AI company that would always prioritize human welfare, opted to sell priority access to its models to the Pentagon in the first place.
The devil's advocate position in their favor, I imagine, would be that some AI lab was inevitably going to serve the military-industrial complex, and that on balance it's better for the lab with the most inflexible moral code to be the one to do it.