In a single challenge, measured by how performant the solution was.
Kimi K2.6 is definitely a frontier-sized model, so on the one hand it's not that surprising it's up there with the closed frontier models.
Being open is nice though, even though it doesn't matter that much for folks like me with a single consumer GPU.
>Being open is nice though, even though it doesn't matter that much for folks like me with a single consumer GPU.
Of course it matters: open weights make coding plans much cheaper than those from Anthropic and OpenAI.
For personal use I have coding plans with GLM 5.1, Kimi K2.6, MiniMax M2.7 and Xiaomi MiMo V2.5 Pro and I am getting a lot of bang for the buck.
It absolutely does matter.
The enshittification will go unnoticed at first, but I'm already finding my favourite frontier models severely nerfed, doing incredibly dumb stuff they weren't doing in the past.
We need open weight models to have a stable "platform" when we rely on them, which we do more and more.
This is the future though. Open weights models that run on H200s provide far more opportunity to build products and real infrastructure around.
You can always distill this for your little RTX at home. But models shaped for consumer hardware will never win wide adoption or remain competitive with frontier labs.
This is something that _can_ compete. And it will both necessitate and inspire a new generation of open cloud infra to run inference. "Push button, deploy" or "Push button, fine tune" shaped products at the start, then far more advanced products that only open weights not locked behind an API can accomplish.
Now we just need open weights Nano Banana Pro / GPT Image 2, and Seedance 2.0 equivalents.
The battle and focus should be on open weights for the data center.
> Being open is nice though, even though it doesn't matter that much for folks like me with a single consumer GPU.
The value of open source is not that you will run it locally, it's that anyone can run it at all.
Even if you can't afford to purchase the hardware to run large open source models, someone can, price access at half the cost of the closed source models, and still make a profit.
The only reason you are not seeing that happen right now is because the current front-running token-providers have subsidised their inference costs.
The minute their enshittification starts, the market for alternatives becomes viable. Without open source models, there will never be a viable alternative.
Even if they wanted to charge only 80% of what a developer costs, the existence of open source models that are not far behind is a forcing function on them. There is no moat for them.