
danaw · yesterday at 8:29 PM

i have a strong suspicion that the most productive software teams that leverage llms to build quality software will use them for the following:

- intelligent autocomplete: the "OG" llm use for most developers where the generated code is just an extension of your active thought process. where you maintain the context of the code being worked on, rather than outsourcing your thinking to the llm

- brainstorming: llms can be excellent at taking a nebulous concept/idea/direction and expanding on it in novel ways that can spark creativity

- troubleshooting: llms are quite good at debugging an issue like a package conflict, random exception, or bug report and guiding the developer to the root cause. llms can be very useful when you're stuck and don't have a teammate one chair over to reach out to

- code review: our team has gotten a lot of value out of AI code review which tends to find at least a few things human reviewers miss. they're not a replacement for human code review but they're more akin to a smarter linting step

- POCs: llms can be good at generating a variety of approaches to a problem that can then be used as inspiration for a more thoughtfully built solution

these uses accelerate development while still putting the onus on the developers to know what they're building and why.

related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.


Replies

Merad · yesterday at 9:34 PM

> intelligent autocomplete

I'm curious how much value others are finding in this. Personally I turned it off about a year ago and went back to traditional (jetbrains) IDE autocomplete. In my experience the AI suggestions would predict exactly what I wanted < 1% of the time, were useful perhaps 10% of the time, and otherwise were simply wrong and annoying. Standard IDE features allowing me to quickly search and/or browse methods, variables, etc. are far more useful for translating my thoughts into code (i.e. minimizing typing).

jorisw · today at 10:48 AM

FWIW I was watching an interview with the founder of Claude Code and he claims that at Anthropic, no code is written by hand anymore.

https://www.youtube.com/watch?v=SlGRN8jh2RI&pp=0gcJCQMLAYcqI...

proofofcontempt · yesterday at 8:54 PM

I'm with you on all apart from code review.

Our team has tried a couple of tools. Most of the issues highlighted are either very surface level or non-issues. When it reviews code from the less competent team members, it misses the deeper issues that human review has caught, such as when the wrong change has been made to solve a problem that could have been solved a better way.

Our manager uses it as evidence to affirm his bias that we don't know what we're doing. It got to the point that he was using a code review tool and pasting the emoji-littered output into the PR comments. When we addressed some of the minor issues (extra whitespace, for example) he'd post "code review round 2". Very demoralising, and some members of the team ended up giving up on reviewing altogether and just approving PRs.

I think it's ok to use it to review your own code, but I don't think it should be an enforced constraint in a process, because the entire point of code review from the start was to invest time in helping one another improve. When that is outsourced to a machine, it breaks down the social contract within the team.

tardedmeme · today at 10:25 AM

I'd add rapid mockups/prototyping as well. Not suitable for production use but very suitable for iterating until it looks right, and then you go and make it for real.

marcosdumay · yesterday at 9:41 PM

On troubleshooting, either LLMs used to be better, or I'm on a huge bad-luck streak. Each of the last few times I asked one, I got a perfectly believable and completely wrong answer that wasn't even on the right subject.

On code review, the amount of false positives is absolutely overwhelming. And I see no reason for that to improve.

But yes, LLMs can probably help on those lines.

bsimpson · today at 12:39 AM

I usually use git and open source tooling, but I've been working with our internal tech stack recently. It includes an editor with AI-powered autocomplete, and it drives me crazy.

It populates suggestions nearly instantly, which is constantly distracting. They're often wrong (either not the comment I was leaving, or code that's not valid). Most of the normal navigation keys implicitly accept the suggestion, so I spend an annoying amount of time editing code I didn't write, and fighting with the tool to STFU and let me work. Sometimes I'll try what it suggests only to find out that it doesn't build or is broken in other stupid ways.

All of this with the constant anxiety to "be more productive because AI."

wg0 · today at 12:11 AM

This is one of the most insightful comments I've read on the subject in a while, minus the code review part.

All the described use cases are well suited to AI, except code review, which is hit or miss.

But agentic coding is snake oil.

rprend · today at 3:33 AM

the most productive teams will be the ones that treat code as compiler output (which we never read)

legacy manual codebases which require human review will be the new "maintaining a FORTRAN mainframe". they'll stick around for longer than you'd expect (because they still work), at legacy, stagnant engineering companies

hnthrow0287345 · yesterday at 10:37 PM

Even generating a first pass of the eventual production code that you can step back and review is useful for getting ideas, so long as you guard yourself against the laziness of going with the first answer it provides.

dude250711 · yesterday at 8:43 PM

> related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.

They are trying to get warm by pissing their pants.

anthonypasq · yesterday at 8:53 PM

people have been making some version of this comment for the past three years, and the only thing that has changed is that you keep adding capabilities.

2 years ago people were saying it was purely autocomplete and an enhanced google.

AI bears just continue to eat shit year after year and keep pretending they didn't say that AI would never be capable of what it's currently capable of.
