OpenAI is playing games.
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
> but we will shake our fist at them while they do it
Not even that. They are not shaking anything except their booty.
Personally, I think OpenAI intends to infiltrate its political enemy's stronghold and look for ways to leak data to "get Trump," as per usual.
They'll say "oops" and then we'll spend the next few years listening to pointless Congressional hearings.
Isn't it simpler to say that Anthropic adopted a values-based use approach and OpenAI adopted a legal one?
Or, in other words, there are two ways to decide how a lucrative property gets used:
1. Designate it as private and draft usage terms per your value system (as long as those values don't violate any laws).
2. In the face of competition, give up some values and agree to a legal definition of use that favors you.
"Red lines" does not mean some philosophical line they will not cross.
"Redlines" are edits to a contract, sent by lawyers to the other party they're negotiating with. They show up in Word's Track Changes mode as red strikethrough for deleted content.
They are negotiating the specifics of a contract, and Anthropic's contract was overly limiting to the DoD, whereas OpenAI's was not.