Hacker News

bsder · today at 2:05 AM

1) 20 minutes is barely enough time to get into flow.

2) There are different levels of debugging. Are your eyes going to glaze over searching volumes of logs for the needle in a haystack with awk/grep/find? Fire up the LLM immediately; don't wait at all. Do the fixes seem to just be bouncing the bugs around your codebase? There is probably a conceptual fault and you should be thinking and talking to other people rather than an AI.
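The haystack case above is the kind of mechanical filtering that's cheap to script before (or instead of) reaching for an LLM. A minimal sketch in Python, standing in for the grep/awk pass; the log lines, pattern, and field layout here are made up for illustration:

```python
import re

# Toy stand-in for "volumes of logs"; in practice you'd stream a file.
logs = [
    "2024-01-01T00:00:01 INFO  req-001 GET /health 200",
    "2024-01-01T00:00:02 ERROR req-002 GET /orders 500 NullPointerException",
    "2024-01-01T00:00:03 INFO  req-003 POST /orders 201",
    "2024-01-01T00:00:04 ERROR req-004 GET /orders 500 NullPointerException",
]

# The grep/awk step: keep only 5xx lines, then count failures per endpoint.
pattern = re.compile(r"\s(\S+)\s(5\d{2})\s")
failures = {}
for line in logs:
    m = pattern.search(line)
    if m:
        endpoint, _status = m.groups()
        failures[endpoint] = failures.get(endpoint, 0) + 1

print(failures)  # {'/orders': 2}
```

If a pass like this narrows the haystack to a handful of suspect lines, those are what you paste into the LLM, not the whole log volume.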

3) Debugging requires you to load a mental model of what you are trying to fix into your head, then correct that model gradually with experiments until you isolate the bug. That takes time, discipline and practice. If you never practice, you won't be able to fix the problem when the LLM can't.

4) The LLM will often give you a very, very suboptimal solution when a really good one is right around the corner. However, you need the technical knowledge to recognize that what the LLM handed you is suboptimal AND the right magic technical words to push it down the better path. "Bad AI. No biscuit." on every response is NOT enough to make an LLM correct itself properly; it will always try to "correct" itself even when that makes things worse.


Replies

agdexai · today at 9:08 AM

Good breakdown. I'd add a layer to point 2: beyond deciding when to use the LLM, there's a separate question of which tool in the LLM ecosystem fits the task.

For haystack-style debugging (searching logs, grepping stack traces), a fast cheap model with large context (Gemini Flash, Claude Haiku) is more cost-effective than a frontier model. For the conceptual fault category you mention — where you actually need to reason about system design — that's when it might be worth paying for o3/Claude Opus class models.

The friction is that most people default to whatever chatbot they have open, rather than routing to the right tool. The agent/LLM tooling space has gotten good enough that this routing is automatable, but most devs haven't set it up yet.
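The routing described above can be as little as a lookup keyed on task type and context size. A minimal sketch, assuming a chat client where model names are interchangeable strings; the model names and thresholds below are illustrative, not from any particular provider:

```python
def pick_model(task_kind: str, context_tokens: int) -> str:
    """Route a debugging task to a model tier (names are hypothetical)."""
    # Haystack work (log search, stack-trace triage): cheap + big context wins.
    if task_kind == "haystack" or context_tokens > 100_000:
        return "cheap-large-context-model"   # e.g. a Flash/Haiku-class model
    # Conceptual faults that need system-design reasoning: pay for frontier.
    if task_kind == "conceptual":
        return "frontier-reasoning-model"    # e.g. an Opus/o3-class model
    return "default-model"

print(pick_model("haystack", 5_000))     # cheap-large-context-model
print(pick_model("conceptual", 2_000))   # frontier-reasoning-model
```

The point isn't the two-branch function itself; it's that once the decision is explicit code rather than "whatever chatbot is open," you can tune the thresholds against your actual API bill.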