Hacker News

brainless · last Saturday at 5:32 AM

The skills approach is great for agents and LLMs, but I feel agents have to keep wider context and become more proactive in orchestration.

I have been running Claude Code with simple prompts (e.g. 1) to orchestrate opencode when I do large refactors. I have also tried generating orchestration scripts instead: generate a high-level list of tasks, then have a script go task by task, create a small task-level prompt (using a good model), and pass the task on to an agent (with a cheaper model). Keeping context low and focused has many benefits; you can use cheaper models for simple, small, well-scoped tasks.
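The split-and-dispatch flow could look roughly like this, a minimal Python sketch under my own assumptions; the helper names are illustrative, not anything from nocodo or opencode:

```python
# Sketch: a strong model produces the high-level plan once, then each task
# becomes a small, focused prompt that a cheaper model can handle.

def split_into_tasks(plan: str) -> list[str]:
    """Naive task splitter: one task per non-empty line of the plan."""
    return [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]

def build_task_prompt(task: str, shared_context: str) -> str:
    """Keep each prompt small and well-scoped for the cheaper worker model."""
    return (
        f"Context: {shared_context}\n"
        f"Task: {task}\n"
        "Only change what this task requires. Do not touch unrelated files."
    )

plan = """
- Rename module config to settings
- Update imports across the codebase
- Fix the tests that break
"""

tasks = split_into_tasks(plan)
for task in tasks:
    prompt = build_task_prompt(task, "Large refactor of project X")
    # dispatch(prompt, model="cheap")  # hypothetical: shell out to a coding agent here
```

The point is that each prompt stays narrow enough for a cheap model to act on reliably, while the expensive model is only used once for planning.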

This brings me to skills. In my product, nocodo, I am building a heavier agent which keeps track of a project, past prompts, and the skills needed, and uses the right agents for the job. Agents are basically a mix of system prompt and tools, all selected on the fly. The user does not even have to generate or maintain skills docs; I can get them generated and maintained by high-quality models from the existing code in the project or the tasks at hand.
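A rough sketch of "agent = system prompt + tools, selected on the fly"; the registry and the keyword routing here are my own illustrative assumptions, not nocodo's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    system_prompt: str
    tools: list[str] = field(default_factory=list)
    model: str = "cheap"

# Hypothetical registry: each named skill maps to a prompt + tools + model tier.
AGENTS = {
    "refactor": Agent("You refactor code without changing behavior.",
                      ["read_file", "edit_file"], "cheap"),
    "review":   Agent("You review diffs for correctness and style.",
                      ["read_file"], "good"),
    "plan":     Agent("You break work into small, well-scoped tasks.",
                      [], "good"),
}

def pick_agent(task: str) -> Agent:
    """Crude keyword routing; a real orchestrator could ask a strong model instead."""
    for name, agent in AGENTS.items():
        if name in task.lower():
            return agent
    return AGENTS["refactor"]  # default worker
```

In practice the selection step itself is a good place to spend a strong model's tokens, since picking the wrong system prompt or tool set wastes the whole task run.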

1 Example prompt I recently used: Please read GitHub issue #9. We have phases clearly marked. Analyze the work and the codebase. Use opencode, which is a coding agent installed here. Check `opencode --help` to see how to run a prompt in non-interactive mode. Pass each phase to opencode, one phase at a time. Add any extra context you think is needed to get the work done. Wait for opencode to finish, then review the work for that phase. Do not work on the files directly; use opencode.
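The phase loop that prompt asks Claude Code to perform could also be scripted directly. A hedged sketch, assuming `opencode run "<prompt>"` is the non-interactive invocation (check `opencode --help` for the actual form on your version); the phase strings are made up for illustration:

```python
import subprocess

def run_phase(phase: str, extra_context: str, dry_run: bool = True) -> list[str]:
    """Build the per-phase command; set dry_run=False to actually invoke opencode."""
    prompt = f"{phase}\n\nExtra context: {extra_context}"
    cmd = ["opencode", "run", prompt]
    if not dry_run:
        subprocess.run(cmd, check=True)  # block until this phase finishes
    return cmd

phases = [
    "Phase 1: extract the parser into its own module",
    "Phase 2: add tests for the extracted module",
]
for phase in phases:
    run_phase(phase, "see GitHub issue #9 for the overall plan")
    # ...review the resulting diff here before moving to the next phase...
```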

My product, nocodo: https://github.com/brainless/nocodo