Hacker News

aeldidi · yesterday at 9:33 PM · 14 replies

There's an odd trend with these sorts of posts where the author claims to have had some transformative change in their workflow brought about by LLM coding tools, but also seemingly has nothing to show for it. To me, using the most recent ChatGPT Codex (5.3 on "Extra High" reasoning), it's incredibly obvious that while these tools are surprisingly good at repetitive or locally-scoped tasks, they immediately fall apart when faced with the things that are actually difficult in software development, and require non-trivial amounts of guidance and hand-holding to get right. This can still be useful, but it's a far cry from the picture painted by the online discourse right now.

As a real-world example, I was told to evaluate Claude Code and ChatGPT Codex at my current job, since my boss had heard about them and wanted to know what they would mean for our operations. Our main environment is a C# and TypeScript monorepo with 2 products being developed, and even with a pretty extensive test suite and a nearly 100-line "AGENTS.md" file, every model I tried basically fails or tries to shortcut nearly every task I give it, even when using "plan mode" to give it time to come up with a plan before starting. To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions, monitoring the "thinking" output, and stopping it to correct it when I saw something wrong there, but at that point I felt silly for spending all that effort driving the bot instead of doing the work myself.

It almost feels like this is some "open secret" we're all pretending isn't the case, since if these tools were really as good as a lot of people are saying, there should be a massive increase in the number of high-quality projects/products being developed. I don't mean to sound dismissive, but I really do feel like I'm going crazy here.


Replies

RealityVoid · yesterday at 9:43 PM

You're not going crazy. That is what I see as well. But I do think there is value in:

- driving the LLM instead of doing it yourself: sometimes I just can't summon the activation energy, and the LLM is always ready to go, so it gives me a kickstart

- doing things you normally don't know how to do. I learned a lot of command-line tools and tricks by watching what Claude does. Doing short scripts for stuff is super useful. Of course, the catch here is that if you don't know the stuff, you can't drive it very well, so you need to use these things in isolation.

- exploring alternative solutions. Stuff that by definition you don't know. Of course, some will not work, but it widens your horizon

- exploring unfamiliar codebases. It can ingest huge amounts of data so exploration will be faster. (But less comprehensive than if you do it yourself fully)

- maintaining change consistency. Here, I think, it's just better than humans. If you have stuff you need to change in 2 or 3 places, you will probably forget one. LLMs are better at keeping details consistent (but not big-picture stuff, interestingly).

FeteCommuniste · yesterday at 9:41 PM

There's got to be some quantity of astroturfing going on, given the players and the dollar amounts at stake.

yusufnb · yesterday at 10:33 PM

At my work I interview a lot of fresh grads and interns, and I have been doing that consistently for the last 4 years. During the interviews I always ask the candidates to show and tell: share their screen and talk about their projects and work at school and other internships.

In the last few months, I have seen a notable difference in the quality and extent of the projects these students have been able to accomplish. Every project and website they show looks polished; most of them could have passed for a full startup MVP in pre-AI days.

The bar has clearly been raised way high, very fast with AI.

peab · yesterday at 10:44 PM

> ... but also seemingly has nothing to show for it

This x1000, I find it so ridiculous.

Usually when someone hypes it up, it's things like, "i have it text my gf good morning every day!!", or "it analyzed every single document on my computer and wrote me a poem!!"

mikenew · yesterday at 9:40 PM

Pretty much every software engineer I've talked to sees it more or less like you do, with some amount of variance on exactly where you draw the line of "this is where the value prop of an LLM falls off". I think we're just awash in corporate propaganda and the output of social networks, and "it's good for certain things, mixed for others" is just not very memetic.

kylecazar · yesterday at 10:31 PM

Matches my experience pretty well. FWIW, this is the opinion that I hear most frequently in real life conversation. I only see the magical revelation takes online -- and I see a lot of them.

chrisjj · yesterday at 10:28 PM

> if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed.

The headline gain is speed. Almost no-one's talking about quality - they're moving too fast to notice the lack.

LogicFailsMe · yesterday at 10:08 PM

I find these agents incredibly useful for eliminating time spent writing utility scripts for data analysis or data transformation. But... I like coding. Being relegated to a manager 100% of the time? Sounds like a prison to me, not freedom.

That they are so good at the things I like to do the least, and still terrible at the things at which I excel? That's just gravy.

But I guess this is in line with how most engineers transition to management sometime in their 30s.

hawkernews · yesterday at 10:58 PM

I remember when Anthropic was running their Built with Claude contest on Reddit. The submissions were few and, let's just say, less than impressive. I use Claude Code and am very pro-AI in general, but the deeper you go, the more glaring the limitations become. I could write an essay about it, but I feel like there's no point in this day and age, when floods of slop in fractured echo chambers dominate.

daliusd · yesterday at 10:07 PM

Maybe it is language-specific? Maybe LLMs have a lot of good JavaScript/TypeScript samples for training, so it works for those devs (e.g. me). I've heard that Scala devs have problems with LLMs writing code too. I am puzzled by good devs not managing to get LLMs to work for them.

philipwhiuk · yesterday at 9:49 PM

It's like CGP Grey hosting a productivity podcast despite his productivity almost certainly going down over time.

It's the appearance of productivity, not actual productivity.

g-mork · yesterday at 10:07 PM

> it's incredibly obvious that while these tools are surprisingly good at doing repetitive or locally-scoped tasks, they immediately fall apart when faced with the types of things that are actually difficult in software development and require non-trivial amounts of guidance and hand-holding to get things right

I used this line for a long time, but you could just as easily say the same thing about a typical engineer. It basically boils down to "Claude likes its tickets to be well thought out". I'm sure there is some size of project where its ability to navigate the codebase starts to break down, but I've fed it sizeable ones, and so long as the scope is constrained, it generally just works nowadays.
