Shouldn't we prioritize large-scale open weights and open-source cloud infra?
An OpenRunPod with decent usage might encourage more non-leading labs to dump foundation models into the commons. We just need the infra to run them. Distilling them down to desktop is a fool's errand; they're meant to run on DC compute.
I'm fine with running everything in the cloud as long as we own the software infra and the weights.
Conceivably the only way we could catch up to Claude Code is for the Chinese labs to start releasing their best coding models and for those models to get significant traction, with companies calling out to hosted versions. Otherwise, we're stuck in a takeoff scenario with no bridge.
I run Qwen3.5-plus through Alibaba's coding plan (Model Studio): incredibly cheap, pretty fast, and decent. I can't compare it to their best open-weights release, though.