Hacker News

dial9-1 · today at 4:51 AM · 4 replies

Still waiting for the day I can comfortably run Claude Code with local LLMs on macOS with only 16 GB of RAM.


Replies

bearjaws · today at 12:43 PM

My super uninformed theory is that local LLMs will trail foundation models by about two years for practical use.

For example, a lot of work right now is going into improving tool calling and agentic workflows, and tool calling only started appearing in local LLMs around the end of 2023.

That's putting aside the standard benchmarks, which local LLMs get "benchmaxxed" on to show impressive numbers but rarely live up to when used with OpenCode. On paper Qwen3.5-397B-A17B should be nearly a Sonnet 4.6-class model, but it is not.

rubymamis · today at 10:10 AM

Doesn't OpenCode support local models?
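For context: the usual way tools like this talk to a local model is through an OpenAI-compatible endpoint exposed by a local server such as Ollama. A minimal sketch of that pattern, assuming a typical default setup (the model tag `qwen2.5-coder` and port 11434 are illustrative, not from this thread):

```python
# Sketch: calling a locally served model over an OpenAI-compatible API.
# Assumes a local server (e.g. Ollama) is listening on localhost:11434;
# the model tag below is a placeholder for whatever you have pulled.
import json
import urllib.request


def build_chat_request(prompt: str, model: str = "qwen2.5-coder") -> dict:
    """Build the JSON body for a /v1/chat/completions request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def local_chat(prompt: str,
               base_url: str = "http://localhost:11434/v1") -> str:
    """Send the request to a local OpenAI-compatible server and
    return the assistant's reply (requires the server to be running)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Whether a given agent works well this way is a separate question from whether it can connect at all; the wiring itself is straightforward.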

gedy · today at 4:54 AM

How close is this? It says it needs 32 GB minimum?
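A 32 GB floor is roughly what the weight math predicts. A back-of-envelope sketch (illustrative figures, not from the linked project; real usage adds KV cache, activations, and runtime overhead on top):

```python
# Back-of-envelope RAM estimate for holding a quantized model's weights.
# Ignores KV cache, activations, and runtime overhead, which can add
# several GB more depending on context length.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory (GB) needed for the weights alone."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9


# A 32B-parameter model at 4-bit quantization:
print(f"{weight_memory_gb(32, 4):.0f} GB")  # 16 GB of weights alone
# The same model at 8-bit:
print(f"{weight_memory_gb(32, 8):.0f} GB")  # 32 GB
```

So on a 16 GB machine, even a 4-bit 32B model leaves essentially nothing for the OS, the KV cache, or anything else, which is why comfortable local coding-agent use tends to start at 32 GB or more.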

3yr-i-frew-up · today at 10:32 AM

[dead]