Hacker News

Agentic Coding Is a Trap

391 points | by ayoisaiah | yesterday at 10:52 PM | 282 comments

Comments

threethirtytwo | today at 2:30 AM

I think AI will evolve to the point where it produces working, bug-free code. But that code won't necessarily be readable, clean, or modular. In the future, the complexity, or how "bad" the code is, won't matter, because the LLM will deal with the complexity and clean up the messes automatically. Your code wasn't modular enough to account for a certain new feature? Well, the LLM will simply make it modular enough. Is the code too hacky to fix a bug? The LLM will make it less hacky, or it can just deal with the hackiness. That is the future. Your skills will atrophy in the same way humanity's skills with the slide rule have atrophied.

Going against the grain here, which statistically is more likely to be right, given how wrong HN was about self-driving and about AI being useless for coding. I think HNers, whose identity is tied up with coding, are of course going to defend that identity till the bitter end, the same way artists did.

komali2 | today at 2:29 AM

> When working on something new or something challenging, me typing out code is the process by which I figure out what we should even be doing.

This is really validating to read. I was recently on a call with a friend, arguing against 100% AI usage, and I was saying that some problems the LLM just can't solve. He asked for an example, and I tried to explain a complex chart I was trying to make at a previous gig, and in the end said "well, to be fair, neither the AI nor I could figure it out lol." He replied "how could you even code it if you didn't know exactly what you were trying to build? You're supposed to know exactly what you're building before you write a single line of code, that's what they teach you in school."

He was poking fun at the fact that I have a boot camp background and he has a uni degree. It's been ten years for both of us now, so he's running out of ways to poke fun at that difference as we even out, but this one poke brought back the old imposter syndrome, since for my entire career, I've thought via coding.

When I get a ticket, I tend to jump into the codebase to figure out the context I need to know about, the current patterns, what files I'll need to worry about; and while I'm there, I tend to start writing some things, and as I do that I pull in a shared function, and in doing so just check out of curiosity where else the function is used, and in doing so discover oh, actually, we have similar functionality elsewhere, lemme just abstract this work for this ticket and the previous functionality into a shared function, and use it in both places. And so on. Before I know it, I'm looking back at the ticket checking if I've covered everything, and sending in the PR.

I've never had complaints about my productivity; in fact, I'm often lauded for it, so I think it at least hasn't been a process that slows me down long term, even if it's messier. But I had been wondering if it makes me less than a "real" engineer. I'm happy to hear others may be doing it this way too.

threethirtytwo | today at 2:14 AM

Same thing happened with assembly language.

oliv__ | today at 1:57 AM

I think the best way to go about this is to start with a manually coded codebase outlining the basic structure of your app (even if ported from some other project), so that you basically define the code "palette", and THEN use AI to add features / edit stuff.

It won't do everything exactly the way you would've coded it but I find this model much better at setting and maintaining "guardrails" for your codebase so you don't find yourself wondering how it all fits together.

luxuryballs | today at 1:49 AM

If you just scale it back a little bit so you're having the agent write methods, services, tests, scaffolding, etc., and keep it concise, you can get a lot of productivity gain without giving up control of the codebase. It feels like some developers are leaning too far into "vibe coding", but I was getting a lot of accelerated development years ago when I was still just asking the chat window for code. There is def a sort of laziness trap.

everyone | today at 1:16 AM

I'm seeing the word "agentic" a lot here. Is there a difference between "Agentic Coding" and "I put a prompt into GPT or Claude and pasted the code into my file"?
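The usual distinction is that "agentic" means the model runs in a loop with tool access, seeing the results of each action and deciding what to do next, rather than producing one answer you paste by hand. A minimal sketch of that loop (all names here are illustrative, not any vendor's API):

```python
# Hypothetical sketch of an "agentic" loop, as opposed to one-shot
# prompt-and-paste: the model repeatedly picks a tool call (read a
# file, edit it, run the tests), observes the result, and continues
# until it decides the task is done. The `llm` callable stands in for
# whatever model API you use.
import subprocess
from pathlib import Path

def read_file(path):
    return Path(path).read_text()

def write_file(path, content):
    Path(path).write_text(content)

def run_tests():
    # Assumes a pytest-based project; any test command works here.
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "write_file": write_file, "run_tests": run_tests}

def agent_loop(task, llm, max_steps=20):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(history)  # model returns a tool call, or signals it is done
        if action["type"] == "done":
            return action["summary"]
        observation = TOOLS[action["tool"]](*action.get("args", []))
        history.append({"role": "tool", "content": str(observation)})
    return "step limit reached"
```

The feedback loop is the point: the model can notice a failing test and try again, which copy-pasting from a chat window makes you do by hand.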

slopinthebag | today at 1:13 AM

I think, ignoring all else, generating code is not a new layer of abstraction. It's the same abstraction; we just have codegen machines now. The same skills are important regardless of whether a person is typing the code or a machine is producing it.

EGreg | today at 1:05 AM

Agents are a first-generation technology. They propose and act at the same time. I recommend you read https://safebots.ai/agents.html

phendrenad2 | today at 1:46 AM

> This is the sentiment being hyped up around the industry currently: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan, and disconnect from writing any code

Agentic agile > agentic waterfall (at least for now)

Don't give the AI a spec, work with it every step of the way.

> pulls the slot machine lever over and over (link to "One More Prompt: The Dopamine Trap of Agentic Coding")

I'm sure the first cave-person to discover how to make fire was equally "addicted" to making fires. That doesn't really say anything about the underlying technology.

> An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism

I don't know what this means, exactly. Anyone have any ideas?

> Atrophying skills for a wide swath of the population

This is very real and something we're going to have to contend with. Software can't really become less complex, and there's a minimum amount of knowledge you need, with or without AIs there to help you. We may need specialized training academies for developers where they spend a few years without AI to learn to program, and then are given a few years of AI programming.

> Vendor lock-in for individuals and entire teams

This isn't really a big problem; you can always switch AI providers if there's frequent downtime.

> only a skilled developer who's thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code, before they become a problem

Agreed...

> Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been proven to impact negatively.

...well, yes and no. AI tooling can help you _reduce_ cognitive debt. Picture this: There is one senior developer (Person A) on the team who understands Service X. Your other developers could schedule time with Person A to get an understanding. Or, they could ask the AI to analyze the project and explain it to them. This scales much better, and if Person A is a poor communicator (let's face it, many senior engineers are), it might be the only working option.
