I’ve been missing agentic capabilities from almost all local LLM apps. It’s like they’re all stuck in 2023.
That’s why I started using OpenCode for this. It works well, and the web UI comes close to a general chat app. You can use folders to organize your sessions into projects, with files and extra instructions (a feature Gemini, annoyingly, still lacks).
It’s pretty powerful.
OpenCode is one solution, but there are also several alternatives.
For example pi-dev; even Codex is open source and should work with any locally hosted model, e.g. through the OpenAI-compatible API provided by llama-server.
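To make the llama-server route concrete, here is a minimal sketch of talking to its OpenAI-compatible endpoint from Python, using only the standard library. The port and URL assume llama-server's defaults (`llama-server -m model.gguf` listens on 8080); the model name and API key are placeholders, since a local llama-server typically ignores both.

```python
import json
import urllib.request

# Assumed default address of a locally running llama-server instance.
BASE_URL = "http://127.0.0.1:8080/v1"

def build_payload(messages, model="local"):
    # Build an OpenAI-style chat-completions request body.
    # The model name is a placeholder; a single-model llama-server ignores it.
    return json.dumps({"model": model, "messages": messages}).encode()

def chat(messages):
    # Send the request to the OpenAI-compatible endpoint and return the reply text.
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=build_payload(messages),
        headers={
            "Content-Type": "application/json",
            # Dummy key; a local server without --api-key does not check it.
            "Authorization": "Bearer sk-local",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat([{"role": "user", "content": "Say hello."}]))
```

Any tool that lets you override the OpenAI base URL (as Codex and most agentic clients do) is doing essentially this under the hood, so the same server can back all of them.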
I have not used pi-dev so far, but its developer’s recent presentation of it (reported in other HN threads) has convinced me that he is among the people who can tell good from bad, which unfortunately cannot be said of many people building AI applications.
So I intend to switch to pi-dev as the coding assistant for my locally hosted models, though I do not yet have results demonstrating that this is the right choice, beyond its lead developer seeming more trustworthy than the others.