This sounds right to me:
> Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.
Helps explain why some people are delighted to have AI write code for them while others are unhappy that the part they enjoyed so much has been greatly reduced.
Similar note from Kellan (a clear member of the make-it-go group) in https://laughingmeme.org/2026/02/09/code-has-always-been-the... :
> That feeling of loss though can be hard to understand emotionally for people my age who entered tech because we were addicted to the feeling of agency it gave us. The web was objectively awful as a technology, and genuinely amazing, and nobody got into it because programming in Perl was somehow aesthetically delightful.
> nobody got into it because programming in Perl was somehow aesthetically delightful.
To this day I remember being delighted by Perl on a regular basis. I wasn't concerned with the aesthetics of it, though I was aware it was considered inscrutable, and the fact that I could read & write it filled me with pride. So yeah, programming in Perl was delightful.
> The web was objectively awful as a technology, and genuinely amazing, and nobody got into it because programming in Perl was somehow aesthetically delightful.
As an old-school Perl coder: not true. Lots of people had a taste for Perl. TIMTOWTDI ("there is more than one way to do it") was sold as an actual advantage.
Perl caters to things almost nobody else does, like having a negated "if" in the form of "unless", and letting you place the condition after the code. So you can do things like:
frobnicate() unless ($skip_frobnicating);
Which is, sure, functionally identical to: if (!$skip_frobnicating) { frobnicate(); }
But it is arguably a bit nicer to read: you lay out the normal flow of the program first, then tack on a "we can skip this if we're in a rare special mode" afterwards. Used judiciously, I do think there's a certain something in it. The bigger problem with Perl, IMO, is that it started as a great idea and didn't evolve far enough -- a bunch of things had to be tacked on, and everyone tacked them on slightly differently, resulting in codebases that can be terribly fragile for no good reason and no benefit.
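To make the statement-modifier point concrete, here's a small self-contained sketch (the flag and sub names are hypothetical, carried over from the example above; modifiers also work with if, for, and while):

```perl
use strict;
use warnings;

# Hypothetical flag and sub, just for illustration.
my $skip_frobnicating = 0;
my $frobnicated       = 0;
sub frobnicate { $frobnicated++ }

# Statement modifier: the "normal" action leads the line,
# the rare special case is tacked on after.
frobnicate() unless $skip_frobnicating;

# The same postfix style works for loops:
my @doubled;
push @doubled, $_ * 2 for (1 .. 3);   # @doubled is now (2, 4, 6)

print "frobnicated $frobnicated time(s)\n" if $frobnicated;
```

Whether this reads better than a braced `if` block is exactly the kind of taste question the thread is arguing about.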
Enjoying something and getting satisfaction out of it are two different things. I don't enjoy the act of coding. But I enjoy the feeling when I figure something out. I also think that having to solve novel puzzles as part of my job helps preserve my brain plasticity as I age. I'm not sure I'll get either of those from Claude.
It's not a pure dichotomy though. I have always been both, and slowly mixing in agentic coding for work has left me some new headspace to do "trad" programming on side projects at home.
I love the exciting ideation phase, I love putting together the puzzle that makes the product work, and I also take pride in the craft.
I feel zero sense of sadness about how things used to be. I feel like the change that sucked the most was when software engineering went from something that nerds did because they were passionate about programming to something techbros did because they were in it for the money. We lost the idealism of the web a long time ago, and the current swamp, with apex reptiles like Zuckerberg, is what we're left with. It became all about the bottom line a long time ago.
The two emotions I personally feel are fear and excitement. Fear that the machines will soon replace me. Excitement about the things I can build now and the opportunities I’m racing towards. I can’t say it’s the most enjoyable experience. The combo is hellish on sleep. But the excitement balances things out a bit.
Maybe I’d feel a sense of sadness if I didn’t feel such urgency to try and ride this tsunami instead of being totally swept away by it.
I think the argument is "a bit too nice": it isn't a binary; motivations are complicated, and sometimes both feelings coexist.
If I reflect for a moment about why I personally got into tech, I can find at least a few different reasons:
- because I like solving problems. It's sad that the specific types of problems I used to do are gone, but it's exciting that there are new ones.
- because I like using my skills to help other people. It's sad that one specific way I could do that is now less effective, but it's exciting that I can use my knowledge in new ways.
- because I like doing something where I can personally make a difference. Again, it cuts both ways.
I'm sure most people would cite similar examples.
"The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable."
Definitely not. Based on my observations over a career as an open-source and agency developer, it was obvious at a glance which of these camps any given developer lived in, just from their code quality. With few exceptions, make-it-go types tended to produce brittle, hacky, short-sighted work that created as many problems as it solved, or more, and on more than one occasion I've seen developers of this stripe lose commit access to FOSS projects because of the low quality of their contributions.
"nobody got into it because programming in Perl was somehow aesthetically delightful."
Compared to trying to get stuff accomplished in C, Perl was an absolute dream to work with, and many devs I knew gravitated to web development specifically out of love for the added flexibility and expressiveness Perl gave them compared to other commonly used languages at the time. Fortunately for us all, language design and tooling progressed.
The divide was never invisible, and there have always been at least three camps.
The "make-it-go" people couldn't make anything go back then either. They build ridiculous unmaintainable code with or without AI. Since they are cowboys that don't know what they're doing, they play the blame game and kiss a ton of ass.
The "craft-lovers" got in the way just as much with their endless yak shaving. They now embrace AI because they were just using "craft" as an excuse for why they didn't know what they were doing. They might be slightly more productive now only because they can argue with themselves instead of the rest of the team.
The more experienced and pragmatic people have always been forced to pick up the slack. If they have a say, they will keep scope narrow for the other two groups so they don't cause much damage. Their use of AI is largely limited to google searches like it always was.
Eh, it also feels like a classic "maybe we somehow have enough perspective on this watershed moment while it's happening to explain it with a simplistic dichotomy". Even this piece interrogates the feeling of "loss" and teases out multiple aspects to it, but settles on a tl;dr of "yep, dichotomy". There are more axes here too: that feeling can depend on what you're building, who you're building it with, time and position in your career, etc.
(I'll admit, though, that this also smells to me a bit too much like introvert/extrovert, or INTP/INTJ/etc so maybe I'm being reflexively rejective)
I think SWEs are genuinely pretty shocked and awed that codegen models can code at all, let alone code well. My guess is a lot of the agita around this is that people thought "I can code therefore I'm smart/special/etc." and then a machine comes by that can do pretty equivalent work and they're entirely unmoored. I sympathize with that, and I don't mean to dismiss it, but that's not what I feel. I really dislike this "doer vs. maker" binary stuff that comes up every now and again, as though everyone who thinks codegen models aren't perfect doesn't want to make anything. I really want to make things--good things--and I dislike the current hype wave behind codegen models because they often make it harder for me to make good things.
I've used Claude Code to build a few big things at work; I ask it questions ("where does this happen", "we have problem X, give me 3 potential causes", etc.); I have it review things before I post PRs; our code review bot finds real heisenbugs. I have mixed success with all of this, but even so I find it useful overall. I'd be irritated if some place I worked, past or present, told me I couldn't use Claude Code or the like.
That said, I've not gotten it to be useful in:
- building entire, complex features in brownfield projects
- solving systemic bugs
- system design/evolution
- feature/product design and planning
- replacing senior engineer code review
It will confidently tell you it's done these things, but when you actually force yourself through the mental slog of reviewing its output, you'll realize it's failed (you also have to be an expert to perform this analysis). Now, maybe it fails in an acceptable way; maybe only slight revision is required; maybe it one-shots the change and verifying success isn't a big mental slog. Those are the good cases. More annoying are the times it fails totally and obviously, but the real nightmares are when it fails totally, yet imperceptibly. It also sometimes can do (some of) these things! But it's inconsistent, such that its successes largely serve to lower your guard against its failures.
And the mental slog is real. The artifacts you have to produce/review/ensure the model adheres to are ponderous. The code generated is ponderous. Code review is even more tedious because there's no human mind behind the code, so you can't build a mental model of the author. Getting a codegen model to revise its work or take a different approach is very hit or miss. Revising the code yourself requires reading thousands and thousands of lines of generated code--again with no human behind it--and building a mental model of what's happening before you can effectively work, and that process is time-consuming and exhausting.
I'm also concerned about the second-order effects. Because switching into the often-required deep mental focus is very difficult (borderline painful), I've seen many, many people reach for LLMs in those moments instead, first a little, then entirely. I've watched people copy/paste API docs into Gemini prompts to explain them. I've watched people unable to find syntax errors in code and paste it into ChatGPT to fix it. I'm confident I'm not the only person who's observed this, and it's a little maddening it's not getting more play.
---
I'm not saying SWEs don't fail in similar ways. I've approved--and authored--human PRs that had insidious flaws with real consequences. I've been asked to "review" PRs pre-ChatGPT that were 10x the size they needed to be. I've seen people plagiarize code, or just copy/paste Stack Overflow constantly. The difference is we build process around these risks, everything from coding patterns, PR size limits, type systems, firing people, borderline ludicrous amounts of unit tests, CI/CD, design docs, staging environments, blue/green deploys, QA lists, etc.
I hate all of it! It's a constant reminder of my flaws and it slows down mean time to dopamine squirt of released code. I'd be the first person to give all this shit the axe. I would love to point Claude at the crushingly long list of PRs I have to review. But I can't, because it still has huge, huge flaws. Code review bots miss obvious problems, and they don't have enough context/knowledge about the system/bug/feature to perform a sufficiently comprehensive review. It would be a net time waste because we'd then have to fix a bug in prod or revise an already-deployed feature/fix--things I like even less than code review, if you can believe it.
These models cannot adequately replace humans in other parts of the SDLC. But, because pesky things like design and code review cap codegen models' velocity, our industry is "rethinking" it all, with no consideration of the models' flaws; "rethinking" here meaning "we're considering having an LLM handle all our code review, or not doing it at all". The only way to describe that is reckless disregard. It's unprofessional and unethical.
So, I think my grief isn't about "the craft". I don't think that's gone and I don't think I'd care if it were. My grief is about the humiliation of our profession, the annihilation of our standards and the betrayal of any representation we made to our users--indeed to ourselves. We deserve software systems that do what they say they do, and up until recently I really thought we were working hard to get there. I don't think that anymore; like many other things in our era (community, truth, curiosity, generosity, trust, learning, rationality, practice, compassion) it has retreated in the face of some flavor of self-interested, shallow grift. I really don't know how or why this happened, but regardless of the cause we truly are in a dark time.
> The web was objectively awful as a technology
I, for one, remember when I could crash Netscape Navigator by using CSS too hard (i.e. at all) or by trying to make a thing move 10px with DHTML. But I kept trying to make the browser do the thing.
I strongly disagree. There's always been two camps ... on everything!
Emacs vs. vi. Command-line editor vs. IDE. IntelliJ vs. VS Code. I could do like twenty more of these: dev teams have always split on technology choices.
But, all of those were rational separations. Emacs and vi, IntelliJ and VS Code ... they're all viable options, so they boil down to subjective preference. By definition, anything subjective will vary between different humans.
What makes AI different to me is the fear. Nobody decided not to use emacs because they were afraid it was going to take their job ... but a huge portion of the anti-AI crowd is motivated by irrational fear, related to that concern.
I think the real divide is over quality and standards.
We all have different thresholds for what is acceptable, and our roles as engineers typically reflect that preference. I can grind on a single piece of code for hours, iterating over and over until I like the way it works, the parameter names, etc.
Other people do not see the value in that whatsoever, and something that works is good enough. We both are valuable in different ways.
Also, there's the pace of advancement of the models. Many people formed their opinions last year, and the landscape has changed a lot since. There's also some effort required in honing your skill at using them. The “default” output is average quality, but with some coaxing, higher-quality output is easily attained.
I’m happy people are skeptical though, there are a lot of things that do require deep thought, connecting ideas in new ways, etc., and LLMs aren’t good at that in my experience.