Hacker News

dazhbog · yesterday at 9:52 PM · 1 reply

I don't get the hype. And I don't think we will reach peak AI coding performance any time soon.

Yes, watching an LLM spit out lots of code is for sure mesmerizing. Small tasks usually work OK, the code kinda compiles, so for some scenarios it can work out.. BUT anyone serious about software development can see what a piece of CRAP the code is.

LLMs are great tools overall: great to bounce ideas off, great to get shit done. If you have a side project and no time, awesome. If your boss/company has a shitty culture and you just want to get the task done, great. Got a mundane coding task, hate coding, or your code won't run in a critical environment? Please, LLM that shit over 9000..

Remember though, an LLM is just a predictor, a noisy, glorified text predictor. Only when AI stops optimizing for short-term gains, has a built-in long-term memory architecture (similar to humans), AND can produce code at Linux-kernel levels of quality and size, then we can talk..


Replies

cmiles74 · yesterday at 10:08 PM

I have junior people on my team using Cursor and Claude, and it's not all great. Several times they've checked in code that also makes small yet breaking changes to queries. I have to watch out for random (unused) annotations in Java projects and then explain why the tools are wrong. The Copilot bot we use on GitHub slows down PR reviews by recommending changes that look reasonable yet either don't really work or negatively impact performance.

Overall, I'd say AI tooling has maybe close to doubled the time I spend on PR reviews. More knowledgeable developers do better with these tools, but they also fall for the tooling's false confidence from time to time.

I worry that people are spending less time reading documentation or stepping through code to see how it works, out of fear that "other people" are more productive.