It's a winner-takes-all market, and everyone wants to be the next Google, not the next Lycos or AskJeeves.
It'd be interesting to see what they spend all the money on, though, since we seem to be hitting diminishing returns, and I'm not sure the typical enterprise user really cares about small benchmark improvements.
It seems like it'd probably be better to spend all that on marketing, free trials, exclusivity/bundle deals, etc. ChatGPT already has a strong advantage there thanks to its brand recognition. I've seen laypeople refer to all LLMs as ChatGPT, the way my grandparents called every video game console a Nintendo.
> It's a winner-takes-all market and everyone wants to be the next Google
It absolutely isn't! If you're billed per token, there is no reason to be married to a single model provider at all. The models have very different strengths and weaknesses, and you should be taking advantage of that at all times.
Where to go next? I don't think anyone has gotten close to automating everyday PC usage, likely via screen capture and raw keyboard+mouse inputs. Imagine how much bigger that market would be than vibe coding.
I don't think it's winner-takes-all. Google is Google in 2026 because Lycos and AskJeeves were bad in comparison. The average user doesn't care whose LLM they're using because they're all close enough. It's hard to see past the bubble bursting, but I expect most people will use several of them depending on context (Copilot via the integration in Windows, Gemini via Siri on their phone, etc.), likely without paying.
It’s absolutely not winner-take-all. LLMs have become a commodity, and the cost of switching models is essentially nil.
Even if ChatGPT has brand recognition among laypeople, your grandparents aren’t the ones shelling out $200/mo for a Claude Code subscription and paying for extra Opus tokens on top of that. Anthropic’s revenue is now neck and neck with OpenAI’s, but if tomorrow they increased the price of Opus by 5x without increasing its capabilities, many would switch to Gemini, GPT 5.4, Cursor, or any cheap Chinese model. In fact, I know many engineers who keep multiple subscriptions active and switch when they hit the rate limits of one, precisely because the tools are so interchangeable.
At some point it could even become cheaper to just buy 8x H100s and host Qwen/Deepseek/Kimi/etc. yourself if you’re one of those companies paying $3k/mo per engineer in tokens.
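The break-even for that is easy to sketch. Every figure below other than the $3k/mo per engineer from the comment is an illustrative assumption (hardware price, hosting overhead, team size), not a quoted price:

```python
# Back-of-the-envelope: buying 8x H100 vs. paying per-token API bills.
# All constants are assumptions for illustration, not real quotes.

H100_UNIT_COST = 30_000            # assumed price per H100, USD
GPU_COUNT = 8                      # the 8x H100 box from the comment
HOSTING_PER_MONTH = 2_000          # assumed power/colo/ops cost, USD per month
TOKEN_SPEND_PER_ENGINEER = 3_000   # USD per month, from the comment
NUM_ENGINEERS = 20                 # assumed team size

capex = H100_UNIT_COST * GPU_COUNT                                  # 240,000
monthly_savings = TOKEN_SPEND_PER_ENGINEER * NUM_ENGINEERS - HOSTING_PER_MONTH

breakeven_months = capex / monthly_savings
print(f"Break-even after ~{breakeven_months:.1f} months")
```

Under these assumptions the box pays for itself in roughly four months; with a handful of engineers instead of twenty, the payback stretches past a year, which is where the API bill starts looking reasonable again.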