Hacker News

agdexai · today at 9:08 AM · 0 replies

Good breakdown. I'd add a layer to point 2: beyond deciding when to use the LLM, there's a separate question of which tool in the LLM ecosystem fits the task.

For haystack-style debugging (searching logs, grepping stack traces), a fast cheap model with large context (Gemini Flash, Claude Haiku) is more cost-effective than a frontier model. For the conceptual fault category you mention — where you actually need to reason about system design — that's when it might be worth paying for o3/Claude Opus class models.

The friction is that most people default to whatever chatbot they have open, rather than routing to the right tool. The agent/LLM tooling space has gotten good enough that this routing is automatable, but most devs haven't set it up yet.
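The routing described above can be sketched as a small dispatch table. This is a hypothetical illustration — the model names, tiers, and the `route` function are assumptions for the sketch, not any real router's API:

```python
# Hypothetical task-to-model routing sketch. Model names and the
# TASK_ROUTES table are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

# Assumed tiers: a cheap large-context model for haystack-style search,
# a frontier reasoning model for conceptual/design faults.
TASK_ROUTES = {
    "haystack": Route("gemini-flash", "large context, low cost per token"),
    "conceptual": Route("claude-opus", "stronger multi-step reasoning"),
}

def route(task_kind: str) -> Route:
    """Pick a model tier for a task; default to the cheap tier."""
    return TASK_ROUTES.get(task_kind, TASK_ROUTES["haystack"])

print(route("haystack").model)    # cheap, large-context tier
print(route("conceptual").model)  # frontier reasoning tier
```

Defaulting the fallback to the cheap tier reflects the cost argument: misrouting a conceptual task to a cheap model wastes one round trip, while misrouting every grep-style task to a frontier model wastes money continuously.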