Hacker News

zkmon, yesterday at 4:41 PM (8 replies)

Yesterday was a realization point for me. I gave a simple extraction task to Claude Code with a local LLM and it "whirred" and "purred" for 10 minutes. Then I submitted the same data and prompt directly to the model via the llama_cpp chat UI and the model single-shotted it in under a minute. So obviously something is wrong with the coding agent, or the way it is talking to the LLM.

Now I'm looking for an extremely simple open-source coding agent. Nanocoder doesn't seem to install on my Mac and it brings node-modules bloat, so no. Opencode seems not quite open-source. For now, I'm doing the work of the coding agent myself and using the llama_cpp web UI. Chugging along fine.


Replies

syhol, yesterday at 4:45 PM

https://pi.dev/ seems popular. What's not open source about opencode? The repo has an MIT License.

SyneRyder, yesterday at 5:28 PM

Probably a silly idea, but I'll throw it into the mix - have your current AI build one for you. You can have exactly the coding agent you want, especially if you're looking for "extremely simple".

I got annoyed enough with Anthropic's weird behavior this week to actually try this, and got something workable up & running in a few days. My case was unusual: there's no Claude Code for BeOS, or my older / ancient Macs, so it was easier to bootstrap & stitch something together if I really wanted an agentic coding tool on those platforms. You'll learn a lot about how models actually work in the process too, and how much crazy ridiculous bandaid patching is happening in Claude Code. Though you might also come to appreciate some of the difficulties that the agent / harnesses have to solve. (And to be clear, I'm still using CC when I'm on a platform that supports it.)

As for the llama_cpp vs Claude Code delays - I've run into that too. My theory is that API traffic is prioritized over Claude Code subscription traffic. The API certainly feels way faster. But you're also paying significantly more.

appcustodian2, yesterday at 4:53 PM

Just in case it didn't occur to you already, you can just build whatever coding agent you want. They're pretty simple.
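The core of such an agent really is small: a chat loop plus tool dispatch. Here's a minimal sketch, with a stubbed `call_llm` standing in for whatever local model endpoint you'd actually use (the stub and the JSON action format are assumptions for illustration, not any particular agent's protocol):

```python
import json
import subprocess

# Hypothetical stub: swap in a real request to your local model
# (e.g. a llama.cpp server) that returns the assistant's reply.
def call_llm(messages):
    # This stub just returns a canned tool-call action.
    return json.dumps({"tool": "shell", "args": {"cmd": "echo hello"}})

# Tool registry: the model picks a tool by name, we run it.
TOOLS = {
    "shell": lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True
    ).stdout,
}

def agent_step(messages):
    """One turn: ask the model, run the tool it requested,
    and append the result so the next turn can see it."""
    reply = call_llm(messages)
    action = json.loads(reply)
    result = TOOLS[action["tool"]](action["args"])
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": f"tool output:\n{result}"})
    return result
```

Loop `agent_step` until the model stops requesting tools and you have the skeleton of a coding agent; everything else (context management, diffs, permissions) is refinement on top.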

btbuildem, yesterday at 8:17 PM

You'd figure by now we would have something between a TUI and an IDE.

btbuildem, yesterday at 5:47 PM

You can run CC with local models, it's pretty straightforward. I've done this with vLLM + a thin shim to change the endpoint syntax.
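The shim's job is mostly payload translation: Claude Code speaks Anthropic's Messages API, while vLLM exposes an OpenAI-style chat-completions endpoint. A simplified sketch of that mapping (a real shim also has to handle streaming, tool calls, and structured content blocks) might look like:

```python
def anthropic_to_openai(payload):
    """Translate an Anthropic Messages-style request body into an
    OpenAI chat-completions-style one (simplified sketch)."""
    messages = []
    # Anthropic puts the system prompt in a top-level "system" field;
    # OpenAI-style endpoints expect it as the first message.
    if "system" in payload:
        messages.append({"role": "system", "content": payload["system"]})
    messages.extend(payload.get("messages", []))
    return {
        "model": payload["model"],
        "messages": messages,
        "max_tokens": payload.get("max_tokens", 1024),
    }
```

Wrap that in a tiny HTTP proxy, point Claude Code's base URL at it, and the local model receives requests it understands.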

jedisct1, yesterday at 4:42 PM

Swival is not bloated and was specifically made for local agents: https://swival.dev

banditelol, yesterday at 5:05 PM

What model did you use with llama_cpp?

enraged_camel, yesterday at 4:46 PM

I use both Cursor and Claude Code, and yes, the latter is noticeably slower with the same model at the same settings.

However, it's hard to justify Cursor's cost. My bill hit $1,500/mo at one point, which is what pushed me to give CC a try.