Hacker News

sho_hn yesterday at 3:46 PM | 13 replies

My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself. If you were a smart dev before AI, chances are you will remain a smart dev with AI.

My experience so far is that to a first approximation, the quality of the code/software generated with AI corresponds to the quality of the developer using the AI tool surprisingly well. An inexperienced, bad dev will still generate a sub-par result while a great dev can produce great results.

The choices involved in using these tools are also not as binary as they're often made out to be, especially since agents have taken off. You can still dedicate part of your day to chiseling away at important code to get it just right, keeping your brain engaged with the result and exploring and growing with the problem at hand, while feeding background queues of agents with other tasks.

I would in fact say the biggest challenge of the AI tool revolution in terms of what to adapt to is just good ol' personal time management.


Replies

bigstrat2003 yesterday at 4:23 PM

> If you were a smart dev before AI, chances are you will remain a smart dev with AI.

I don't think that's what people are upset about, or at least it's not for me. For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.

akdev1l yesterday at 8:15 PM

I'm not worried about being a good dev or not, but these AI things thoroughly take away from the thing I enjoy doing, to the point that I'd consider leaving the industry entirely.

I don't want to wrangle LLMs into hallucinating correct things or whatever; I don't find that enjoyable at all.

nprz yesterday at 7:14 PM

I think there is more existential fear that is left unaddressed.

Most commenters in this thread seem to be under the impression that where the agents are right now is where they will be for a while, but will they? And for how long?

$660 billion is expected to be spent on AI infrastructure this year. If the AI agents are already pretty good, what will the models trained in these facilities be capable of?

bambax yesterday at 5:26 PM

Yes, absolutely. I think the companies that don't understand or value software, that treat all tech as fundamentally interchangeable, and that will therefore always choose the cheaper option and fire all their good people, will eventually fail.

And I think AI is in fact a great opportunity for good devs to produce good software much faster.

flatline yesterday at 9:16 PM

I think it represents a bigger threat than you realize. I can't use AI at my day job to implement the multi-agent workflows I see; they're all controlled by other companies with little or no privacy guarantees. I can run quantized (even more braindead) models locally, but then my work will be 3-5 years behind the SOTA, and when the SOTA is evolving faster than that timeline, there's a problem. At some point there's going to be turnover, like a lake in winter, where AI companies effectively control the development lifecycle end-to-end.

OptionOfT yesterday at 5:37 PM

I think the issue is that, given the speed at which a bad dev can now generate sub-par results that look good enough at face value, they overwhelm any procedures in place.

Pair that with management telling us to use AI to go as fast as possible, and there is very little time for course correction.

icedchai yesterday at 5:38 PM

I agree with the quality comments. The problem with AI coding isn't so much the slop as the developers not realizing it's slop and trying to pass it off as a working product in code reviews. Some of the stuff I've reviewed in the past 6 months has been a real eye-opener.

ex-aws-dude yesterday at 4:50 PM

I think no one is better positioned to use these tools than experienced developers.

mannanj yesterday at 5:07 PM

For me the problem is simple: AI adoption is an active prisoner's dilemma. The collective outcome is worse because we aren't asking the right questions for optimal human results; we are defecting and using AI selfishly because we are rewarded for it. There's a lot of potential for our use to be turned against us, since we are training these models for companies with no commitment to the common good, and none to return money to us or to common welfare if our jobs are disrupted and AI replaces us entirely.

bmitc yesterday at 7:38 PM

> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself.

I do try to do that, and I've convinced myself that nothing has really changed in terms of what is important, which is systems thinking. But AI is just one more barrier to convincing people that systems thinking is important, and it's all just exhausting.

Besides perhaps my paycheck, I have nothing but envy for people who get to work with their hands _and_ minds in their daily work. Modern engineering is just such a slog. No one understands how anything works, nor even really wants to. I liken my typical day in software to a woodworker who has to rebuild his workshop every day just to be able to do the actual woodworking. The amount of time I spend in software merely to be able to "open the door to my workshop" is astounding.

kgwxd yesterday at 8:16 PM

One thing I'm hoping will come out of this is the retirement of coders who turn what should be a basic CRUD app (which is just about everything) into either a novelty project that tries to pre-solve every concern that could ever come up, or a no-code solution that will never actually be used by a non-developer and will frustrate every developer forced to use it.

tehjoker yesterday at 7:47 PM

It's a combination of things... it's not just that AI feels like it is stripping away the dignity of the human spirit in some ways, it's also that the work we are doing is often detrimental to our fellow man. So learning to work with AI to do that faster (!!) (if it is actually faster on average) feels like doubling down.