My nightmare scenario (which might be starting to materialize) is that our last years in the industry will be spent as prompt monkeys / agent "managers" working on codebases we barely understand, at such velocity that there's no way we can gain real understanding. Whenever something breaks (and it will, a lot), AI will fix it - or so we'll hope. And the sad thing is - this might work; you'll get more stuff done with fewer people. Sure, we didn't sign up for this, and the job I've described isn't fun, but why should management care? They have their own problems, and AI is threatening their jobs as well.
It already happened. The old-timers correctly observe that modern applications are bloated and inefficient because of all the heavyweight frameworks, excessive abstraction layers, and the "left-pad culture" of pulling in external dependencies for the most trivial things - but also that these things enabled less capable developers to build software effectively enough to meet industry demand. LLM-only coders are just the next step in the devolution.
That's literally what https://news.ycombinator.com/item?id=47157039 seems to be about, and I had the same reaction as you
> My nightmare scenario (which might be starting to materialize) is that our last years in the industry will be spent as prompt monkeys / agent "managers" working on codebases we barely understand, at such velocity that there's no way we can gain real understanding.
It will always be preferable to work on an understandable codebase, because that maximizes the AI's affordances too. And then the AI can explain things to you. A skilled human will always have a lot of solid knowledge relating to their hyper-specific niche that isn't part of your average general purpose AI, so humans will obviously have a key role to play still.
I'm already seeing this in the company I recently joined: 80-90% of code is generated/prompted. Big PRs, very little review or oversight. Absolutely nobody considering long-term architecture (and IMO nobody capable of such). In general, there's very little critical thinking involved at any stage, just throw error messages back into the LLM, rinse and repeat. I'm hoping there's a world where people with skills are useful in getting these projects back on track, but perhaps as a society we're learning to accept this reduction in quality.
At work we build enterprise software with a stack like Kotlin + Spring, multiple NextJS apps, microservices, and a Rust CAD engine.
I haven’t written code, aside from tweaking stuff here and there, in probably 3 or 4 months. Before that I wrote code by hand every day for many years.
I’ve found a lot of fun parts of my new workflow that I enjoy. I still miss being fully immersed in a problem deep in the files… and sometimes it feels like homework reading so many implementation summaries from Claude, because the feature spans 4 repos and is too much code to read. But I do love shaping the code into different solutions, exploring in a way that is unique to AI-native workflows. And I love building agent skills and frameworks with/around them and expanding that out to more aspects of the company or life — there’s deep work to be had that still feels like hacking in the trenches. I get a lot of the same satisfaction in different ways, and there’s a lot of exciting novelty to explore that was previously out of reach due to time and energy constraints.
Also, I don’t like our backend stack, and I hate React / NextJS to the point of derangement syndrome — I am so happy that I don’t have to write it and can just focus on UX: making customers happy, making lives easier, and shaping the software into better and better versions of itself at a much faster pace.
People who learned good software engineering intimately before the inflection point are extremely lucky right now. Existential dread and the stages of grief have been a part of the journey for me too sadly, but there’s a lot to celebrate and explore with the right attitude.