It seems fairly transparent that they are heavily resource constrained (training runs for Claude 5.x, higher usage and growth than anticipated). I don’t disagree that their long-term play is monopolistic pricing, but what we’re observing seems better explained by a very tight compute budget: they are trying to optimize it so they can put as much as possible into next-gen experiments and training, to make sure they stay competitive over the next six months to a year.