Hacker News

dgdosen · today at 3:16 PM · 3 replies

Is it me, or will this just speed up the timeline where a 'good enough' open model (Qwen? Deepseek? I'm sure the Chinese labs will see value in undermining OpenAI/Anthropic/Google) combined with good-enough, cheap hardware (a 10x inference improvement in an M7 MacBook Air?) makes running something like opencode locally a no-brainer?


Replies

ac29 · today at 3:51 PM

The good enough alternative models are here, or will be soon, depending on your definition of good enough. MiniMax-M2.5 looks really competitive, and it's a tenth of the cost of Sonnet-4.6 (they also have subscriptions).

Running locally is going to require a lot of memory, compute, and energy for the foreseeable future, which makes it really hard to compete with ~$20/mo subscriptions.
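Rough back-of-envelope (every number here is an assumption, just to illustrate the gap): a box drawing ~400W under load for 4 hours a day at $0.15/kWh is about $7/mo in electricity alone, and amortizing a ~$4,000 high-RAM machine over three years adds another ~$110/mo. The subscription wins on pure cost unless you already own the hardware.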

kevstev · today at 4:47 PM

Personally I am already there: I go to Qwen and Deepseek locally via ollama for my dumb questions and small tasks, and only go to Claude if they fail. I do this partly because I am just so tired of everything I do over a network being logged, tracked, mined, and monetized, and partly because I would like my end state to be all local tools, at least for personal stuff.
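A minimal sketch of that routing, assuming Ollama's default REST endpoint on localhost:11434 (the model name is just an example, and ask_remote stands in for whatever hosted fallback you use):

    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask_local(prompt, model="qwen2.5-coder"):
        # Query a locally served model through Ollama's REST API.
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    def ask(prompt):
        # Local first; only fall back to a hosted API if the local
        # model is unreachable or errors out.
        try:
            return ask_local(prompt)
        except (requests.RequestException, KeyError):
            return ask_remote(prompt)  # hypothetical hosted fallback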

irishcoffee · today at 3:21 PM

People running models locally has always been the scare for the samas of the world. "Wait, I don't need you to generate these responses for me? I can get the same results myself?"
