Hacker News

fnordpiglet · today at 12:56 AM · 13 replies

Interestingly, I’ve learned more about the languages, systems, and tools I use in the last few years of agentic coding than I did in 35 years of artisanal programming. I am still vastly superior to the agentic tools at making decisions about systems, techniques, and approaches, but they are like a really, really well-read intern who knows a great deal of errata but has very little experience. They enthusiastically make mistakes but take feedback, at least up front, even if they often forget it because they don’t totally understand it and haven’t internalized it.

The claim that you should know everything about everything you work on is an intensely naive one. If you’ve worked on a team of more than one, there’s a lot of stuff you don’t totally grok. If you work in an old code base, almost every bit of it is unfamiliar. If you work in a massive monorepo built over decades, you’re lucky if you even understand the parts everyone considers you an expert in.

I often get the impression that folks making these claims are either very junior themselves, work basically alone, or have worked on the same project for 20 years. No one who works in a team or larger org can claim to know everything in their code base. No one doing agentic programming can either. But I can at least ask the agent a question, and it will be able to answer it. And after reading other people’s code for most of my adult life, I absolutely can read the LLM’s. The fact that a machine wrote crappy code rather than a human bothers me not in the least, and at least the machine will take my feedback and act on it.


Replies

byzantinegene · today at 2:00 AM

You have 35 years of experience and have already built up the learning capability and general framework to acquire new knowledge. You know how to use agentic coding as a tool to supplement your work. The juniors who start today don't have that; they overrely on agentic coding and do not know what they don't know.

RHSeeger · today at 11:32 AM

> The claim you should know everything about everything you work on is an intensely naive one

This is a slight tangent from that, but I place a lot of value on the ability to offload some/most of the mental model to AI. I need to know less about everything (involved in this one task) when working on it, because a lot of the peripheral information can be handled by the AI. I find that _incredibly_ useful.

jmuguy · today at 12:59 AM

This post does not make the claim that "you should know everything about everything you work on"; it's making the claim that writing code and being able to read code effectively are intrinsically linked.

grogenaut · today at 1:02 AM

Agreed. I don't know anything about turning sand into transistors, or about assembly, but I do well regardless. So I don't know my full stack either.

What is important is not being afraid to learn the rest of your system and keeping an index.

Most importantly, it's about being able to spin up on anything quickly. That's how you get wide reach: digging in when you have to, gliding high when you can, at the appropriate level for the problem at hand.

When I was in college, eons ago, they taught CS folks all of engineering. "When will we need to know chem-e or analog control systems?" we asked. "You won't. You just need to be able to spin up on it enough to code it and then forget it. We're providing you a strong base."

That holds even within just large code bases.

catlifeonmars · today at 2:42 AM

> The claim you should know everything about everything you work on is an intensely naive one.

I disagree with this take. Personally, I pride myself on learning the code bases I work on in detail, sometimes better than the leads for those code bases do. I'm not saying that everyone should do so, but it's achievable and not naive at all.

girvo · today at 1:33 AM

> The claim you should know everything about everything you work on is an intensely naive one

Nothing in the article made that claim.

adrian_b · today at 7:20 AM

> The claim you should know everything about everything you work on is an intensely naive one.

It is true that you normally do not need to know everything, or even most of it.

Despite this, it is necessary to be able to quickly discover and understand anything about the project or system you work on.

I have seen plenty of software teams get stuck at some point because they could not solve some trivial problem that required zooming into a part of the project where extra skills were needed to understand what they saw, such as a lower-level language, assembly language, or some less common algorithms or networking protocols.

Or they were stuck not because they lacked the skills to interpret what they saw, but because they depended on a black box, like a proprietary library or a proprietary operating system, and it was impossible to determine what it really did, as opposed to what it was expected to do, without being able to dive into its internals.

So I believe that the environment should always enable you to know everything about everything you work on, even if this should be only very seldom necessary.

nextaccountic · today at 6:43 AM

> even if they often forget

> But I can at least ask the agent a question and it will be able to answer it

A problem here is that, in some sense, the agent that wrote the code is not the same agent that is answering questions about it. If the original agent didn't leave its reasoning behind, you are probably out of luck.

There are tools like git-ai [0] that capture LLM sessions, associate each file edit with a specific agent action, and let agents query a given piece of code to read the conversation around it (what the user prompted, what the reasoning of the LLM that created the code was, etc.). They could change the balance, but they are not widely used.

[0] https://usegitai.com/
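To make the idea concrete, here is a minimal sketch of that kind of provenance capture using plain git notes. This is not git-ai's actual interface (which I haven't verified); the commit message, note text, and `llm` notes namespace are all made up for illustration:

```shell
# Sketch: attach an agent's prompt/reasoning to a commit with git notes,
# then read it back later. All names here are hypothetical examples.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email agent@example.com
git config user.name agent

# The "agent" makes an edit and commits it.
git commit -q --allow-empty -m "add retry logic"

# Record the session context in a dedicated notes namespace, keeping it
# out of the commit message itself.
git notes --ref=llm add -m 'prompt: add retry with exponential backoff'

# Later, an agent (or a human) can recover the conversation around the change.
git notes --ref=llm show HEAD
```

Because notes live in their own ref, this metadata travels with the repo only if explicitly pushed, which is roughly the trade-off any such tool has to manage.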

crjohns648 · today at 1:33 AM

I have also seen the learning acceleration; the set of techniques and technologies I have learned how to apply has grown significantly.

From a human perspective, though, I'm apprehensive about the effect AI will have on the human "very well-read intern." People who know specific areas very deeply are fascinating to talk to, but now almost anyone can at least emulate deep knowledge of an area through the use of AI. The productivity is there, but the human connection is missing.

valunlabs · today at 11:42 AM

I agree with this. While refactoring the outdated code at my company, I realized that "you can't know everything." After the refactor, I can ask the LLM questions and get answers, but unfortunately, when integrating a new feature, it keeps treating it as a "new layer."

i_love_retro · today at 1:57 AM

I think it's important to at least have a mental model of code you directly commit to the codebase, and that doesn't happen if it was written by an agent.

beepbooptheory · today at 1:16 AM

"Hey! Just popping in to say that agentic coding is actually pretty great and is making me better in all the ways; but also want to say at the same time that it's actually not all that different from anything else, so we can chalk up any critique of it to individual naivety and bias."
