Hacker News

jumploops · today at 4:49 AM · 2 replies

This is neat, and matches something I observed with early Claude Code usage:

Sonnet would often call tools quickly to gather more context, whereas Opus would spend more time reasoning and trying to solve a problem with the context it had.

This led to lots of duplicated functions and slower development, though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less.

My takeaway was that “dumber” (i.e. smaller) models might be better as an agentic harness, or at least feasibly cheaper/faster to run for a large swath of problems.

I haven’t found Gemini to be particularly good at long-horizon tool calling though. It might be interesting to distill traces from real Codex or Claude Code sessions, where there are long chains of tool calls between each user query.

Personally, I’d love a slightly larger model that runs easily on, e.g., a 32GB M2 MBP, but with tool-calling RL as the primary focus.

Some of the open weight models are getting close (Kimi, Qwen), but the quantization required to fit them on smaller machines seems to drop performance substantially.


Replies

ai_fry_ur_brain · today at 5:19 AM

The key is to not run LLMs in loops. This trend of agentic frameworks is silly, and mostly exists to make LLM companies more revenue. An LLM is mostly useless on its own, but it's much more useful and reliable with one-shot tooling.

I have a suite of tools I've built for myself on top of the OpenRouter API for very specific tasks. Press a button and the LLM does (one) useful thing, not press a button and let the LLM run tool calls in a loop for 5 minutes and hope it does things in the correct order.

If multiple tools need to be called to do a useful thing, I chain those together deterministically in my code. This is much more reliable, as I can check the output of task A before proceeding to task B or C, and it's more time- and token-efficient. Agentic loops are a huge scam.
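A minimal sketch of what that deterministic chaining might look like (all names here are illustrative, not from the commenter's actual tools): each step is one single-shot LLM call, and ordinary code validates step A's output before choosing step B or C, instead of letting the model decide inside a loop.

```python
# Deterministic chaining of one-shot LLM calls (hypothetical example).
# `call_llm` would wrap a single OpenRouter chat-completion request;
# here a stub stands in so the control flow is easy to see.

def chain(call_llm, text):
    # Step A: classify the input with one prompt.
    label = call_llm(f"Classify as BUG or FEATURE: {text}").strip()
    if label not in ("BUG", "FEATURE"):
        # Check A's output in plain code -- fail fast, no retry loop.
        raise ValueError(f"unexpected label: {label!r}")
    # Step B or C is chosen by ordinary code, not by the model.
    if label == "BUG":
        return call_llm(f"Draft a bug-report summary: {text}")
    return call_llm(f"Draft a feature-request summary: {text}")

# Stub standing in for a real API call, purely for demonstration.
def stub(prompt):
    if prompt.startswith("Classify"):
        return "BUG"
    return "summary of: " + prompt.split(": ", 1)[1]

print(chain(stub, "app crashes on save"))
```

The point of the pattern is that the branch and the validation live in code you can test, while each LLM call stays a single request whose output is checked before any tokens are spent on the next step.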

hansmayer · today at 6:52 AM

> and matches an observation I saw with early Claude Code

> though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less

> My takeaway was that

> haven’t found Gemini to be

For the love of all that's holy, folks, please stop investing your time to fill in the gaps that the Slop Corporations are leaving wide open in their "tooling". Why should you strain yourself in an attempt to "make it work" one way or another? Google, MS, Meta, OpenAI etc. are all now subtly pushing to call their tooling "Intelligence" (not even Artificial Intelligence), so why is it not intelligent? Why does it not work? 1T+ in investments, and we should still think up the best magic chants and configurations to make the slop generators produce half-valid output? All while some of the tech leaders are openly threatening to subdue us in their weird visions of "civilisation"? We have a better use for our superior brains; let's not denigrate ourselves into being helpless helpers to the magic oracle (if only it actually were a magic oracle!).