It likely won't matter much in the end, but I do think this could be a significant mistake for OpenAI.
OpenAI has two real competitors: Anthropic in the enterprise space and Google in the consumer space. Google fell far behind early on and ceded significant market share to ChatGPT. They're catching up, but ChatGPT's runaway success gives OpenAI a huge runway with consumers.
In the enterprise space, OpenAI's partnership with Microsoft has been a gold mine. Every company on the planet already has a deep relationship with Microsoft, so being able to say "hey, just add this to your Microsoft plan" has been huge for OpenAI.
The thing about enterprise is the stakes are high. Every time OpenAI signals that they're not taking AI safety seriously, Anthropic pops another bottle of champagne. This is one of those moments.
Again, I doubt it matters much either way, but if OpenAI does end up blowing up, decisions like this will be in the large pile of reasons why.
This take is, imo, very contrarian. Is Anthropic really popping champagne? They kind of look like the bad guys in this entire saga. If not the bad guys, then the enemy of fun and of open-source builders.