This assumes that these companies aren't going to use smaller providers or host models themselves. THAT is the great big assumption baked into all the Big AI funding.
I think it's a very, very bad assumption. After trying GLM-5 and Qwen3 on Ollama Cloud, not only were they dramatically faster than OpenAI's offerings, they were just as good, if not better, at doing what I asked of them.
Claude Code is still superior to anything else, but for coding, GLM-5 and Qwen3 are easily on par with GPT-5.X.