OpenAI, Anthropic, Google, and Microsoft certainly want path dependence, but the very nature of LLMs and intelligence itself might make that hard unless they can develop models that are truly differentiated from (and better than) the rest. The way Chinese open-source models are catching up makes me suspect that won't happen. The models will just be a commodity. There is a countdown clock for when we can all get Opus 4.6+ level models, and it's measured in months.
The reason these LLM tools are good is that they can "just do stuff." Anthropic bans third-party subscription auth? Fine, I'll just have my other tool drive Claude Code in tmux. If third-party agents really can be blocked from doing stuff (via some advanced always-on spyware or whatever), then a large chunk of the promise of AI is dead.
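A rough sketch of what that workaround looks like, assuming tmux and the `claude` CLI are on the PATH (the session name, prompt, and wait time below are just placeholders, not anything official):

    import subprocess
    import time

    SESSION = "cc"  # hypothetical tmux session name

    def tmux(*args):
        # Run a tmux subcommand and return its stdout.
        return subprocess.run(["tmux", *args], check=True,
                              capture_output=True, text=True).stdout

    # Start Claude Code inside a detached tmux session.
    tmux("new-session", "-d", "-s", SESSION, "claude")

    # Type a prompt into it, exactly as a human at the terminal would.
    tmux("send-keys", "-t", SESSION, "explain the failing test in src/foo.py", "Enter")

    # Give it a while, then scrape whatever is on screen.
    time.sleep(30)
    print(tmux("capture-pane", "-t", SESSION, "-p"))

The point isn't this particular script; it's that anything a human can type into a terminal, another program can type too, which is why bans at the auth layer are so hard to make stick.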
Amp just announced today that they are dumping IDE integration. Models seem to run better on bare-bones software like Pi, and you can add or remove stuff on the fly because the whole thing is open source. The software writes itself. Is Microsoft just trying to cram a whole new paradigm into an old package? Kind of like the computer printer: it will be a big business, but it isn't the future.
At scale, the end provider ultimately has to serve the inference -- they need the hardware, the data centers, and the electricity to power those data centers. Someone like Microsoft can also provide an SLA and price it appropriately. I'll spare you the $200/month customer-acquisition-cost rant, but one user running a bunch of sub-agents can spend a ton of money. If you don't have a business or a funding source behind you, the way state-of-the-art LLMs are being used today is totally uneconomical (easily $200+ an hour at API prices).
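For the "$200+ an hour" figure, here's the back-of-envelope math with illustrative numbers: roughly Opus-class list prices (~$15 / $75 per million input/output tokens) and a guess at how many tokens a context-heavy agent burns through in an hour.

    # All numbers are illustrative assumptions, not measured usage.
    PRICE_IN = 15 / 1_000_000     # $ per input token (~$15 / MTok)
    PRICE_OUT = 75 / 1_000_000    # $ per output token (~$75 / MTok)

    agents = 4                    # sub-agents running in parallel
    input_tok_per_hr = 3_000_000  # context gets re-sent on every turn, it adds up
    output_tok_per_hr = 150_000

    per_agent = input_tok_per_hr * PRICE_IN + output_tok_per_hr * PRICE_OUT
    print(f"per agent:  ${per_agent:,.2f}/hr")           # ~$56/hr
    print(f"all agents: ${per_agent * agents:,.2f}/hr")  # ~$225/hr

Prompt caching and cheaper models pull that down, but the order of magnitude is the point: a flat $200/month subscription is heavily subsidized relative to metered API usage.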
36+ months out, if they overbuild the data centers and the revenue doesn't come in the way OpenAI and Anthropic are forecasting, there will be a glut of hardware. If that happens, I'd expect local model usage to scale up too, and things get harder for the enterprise providers.
(Nothing is certain but some things have become a bit more obvious than they were 6 months ago.)
Thinking about this a little more -> "nature of LLMs and intelligence"
Bloated apps are a material disadvantage. If I'm in a competitive industry, that slowdown alone can mean failure. The only thing Claude Code has going for it now is the loss-making $200/month subsidy. Is there any conceivable GUI overlay that Anthropic or OpenAI can add that makes their software better than the current terminal apps? Sure, for certain edge cases, but then why isn't the user building those themselves? 24 months ago we could have said that's too hard; that isn't the case in 2026.
Microsoft added all of this stuff into Windows, and it's a five-alarm fire. Stuff that used to be usable is a mess and really slow. Running Linux with Claude Code, Codex, or Pi is clearly superior to a Windows device with none of them (if it weren't possible to run these on Windows; just a hypothetical).
From the business/enterprise perspective, there is no single most important thing, but having an environment that is reliable and predictable is high on the list. It's Monday morning and the Anthropic API endpoint is down? Uh oh. In the longer term, businesses will really want to control both the model and the software that interfaces with it.
If the end game is just talking to the Star Trek computer, and competitors are narrowing the gaps rather than widening them (e.g. Anthropic and OpenAI now release models minutes apart, and Chinese frontier models are getting closer in capability, not further away), then it is really hard to see how either company locks down the vertical.
We could also move down the stack, and then the real problem for OpenAI and Anthropic is Nvidia. Say it's 2030 and the data center expansion has gone bust: Nvidia starts selling all of these cards directly to consumers and has a huge financial incentive to make sure performant local models exist. Everyone in the semiconductor supply chain below Nvidia only cares about keeping sales going, so the buck stops with Nvidia.
Maybe Nvidia is the real winner?
Also, is it just me, or does it now feel like HN comments are just talking to a future LLM?