As a /senior/ developer I really dislike blanket statements. I've seen just as many failures caused by
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as by experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways in which eager/very open seniors can drive systems into hard-to-get-out-of corners. But then there are people who claim PHP5 is all you need.
> They want to avoid development as much as they can.
One of my favorite .sigs was:
I hate code, and want as little of it as possible in my software.
I don't remember where I saw it, but it was a while ago. It's possible the author has an HN account.

One of the things that happens to "avoiders" is that they get attacked for being "negative." It can become career-ending when the management chain is the "Move fast and break things" type.
I just stopped offering suggestions, after encountering that crap a few times, and learned to just quietly make preparations for when the wheels fall off.
I have spent my entire adult life, shipping, and shipping means lots of "not-shiny," boring stuff. But it gets onto shelves, and into end-users' hands. I was originally trained in hardware development, where mistakes can't be fixed with an OTA update. It taught me to "play the tape through," and make sure that I do a good job on every part of the project; which includes a lot of anticipating problems, and designing mitigations and prevention.
Most proofs of concept I've seen get traction turned into production.
A rewrite?
I recall a few times when everyone promised that if this gets promoted, then we will rewrite it from zero. It never happened.
The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right: there are companies, two of them very popular these days, that took it to an extreme. You ship something fast, and since it only scales linearly, you go raise money. Successful companies, countless users, some of them even pay. Who's to blame? The senior developer, or simply someone reasonable who asks: how is that sustainable, what's the way out of this? Those people are fired, so whoever's left is a believer.
What I found is that my willingness to communicate and share my expertise is usually not in demand with more junior developers. In general, I find developers uninterested in finding a mentor. They don't look at your LinkedIn profile; they don't look at you as a possible source of knowledge and expertise.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
It's funny, I've been literally trying to convey these exact sentiments to my team over the last few days down to the:
> Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
Pretty much word for word.

It feels like engineers are collectively feeling the pain now that product has decided that engagement of mental faculties is no longer necessary on their part; just build it and figure out the user persona and utility later... if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.
It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
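For what it's worth, the "button nobody has built yet" approach can literally be a few lines of front end. Here is a minimal fake-door sketch, assuming a browser context and a made-up /api/metrics endpoint; none of these names come from the article or any real product:

```ts
// Fake-door experiment: render a button for a feature that doesn't exist yet
// and measure how many people click it before committing to building it.
const exportButton = document.createElement("button");
exportButton.textContent = "Export to PDF";

exportButton.addEventListener("click", () => {
  // Record the interest signal; "/api/metrics" is a hypothetical endpoint.
  void fetch("/api/metrics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "export_pdf_clicked", ts: Date.now() }),
  });
  // Be honest with the user instead of pretending the feature works.
  alert("PDF export is coming soon. Thanks for letting us know you want it!");
});

document.body.appendChild(exportButton);
```

If nobody clicks for a month, the whole feature, and all its complexity, never needs to exist.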
This misses the basic problem of incentives. What "the company" wants doesn't matter, it's what the people making particular decisions want.
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
I'm trying to avoid a snarky comment like "oh of course it's a senior dev's fault again", so I'll tell a story.
When I started around 20 years ago, my junior dev experience was pretty harsh - I was taught, not always in a correct or respectful manner, to do this and not to do that. Overall, though, it was absolutely useful and formative. Senior engineers are rarely abusers; they communicate real issues, better or worse, and it was on me to figure out why and how to work the right way. Also, we were raised with a pretty receptive attitude toward the "old" technology - from Tcl and Smalltalk to Ada, Perl, etc. They were admired classics rather than just old shit.
Surprisingly, this didn't translate too well to my experience when I found myself in a mentoring position. Starting from maybe 2015, the situation changed. The newer generation of devs felt much more entitled to social games, higher salaries and opinions than to authentic engineering interest, and therefore to my experience.
No amount of structured communication would change that; even the cold pressure of production failures and very specific negative management feedback normally doesn't work. They're also more lenient about prod screw-ups, and often use the "everyone can make a mistake" excuse to excuse even more mistakes. The thing is, most of them don't want to hear it, for any reason.
Like many of my peers, I learned humility and accepted that as is, only using my advantage in expertise when it comes directly to my area of responsibility, and to avoid the hassle imposed by my eager younger teammates - for instance, I usually parse prod logs and settings with the command line while the younger guys try to push through Loki/Grafana query limitations.
I'm fine and safe, and my job is no less secure, I guess, because someone has to fix the bugs. The companies are less so, but as long as they don't care, why would I?
It will be interesting to see this generation wiped out by the next one. I don't think they'll be in very good shape, because the foundation they built on (namely quickly changing libraries and language supersets like React/TypeScript/some JVM flavour/and, I hope, Kafka) will be replaced by the next tech fashion.
A really competent senior figures out what the prevailing culture of the company is now, and what it will need to be in 5 years, and adapts as they go. Startups with 5 people maybe don't need extra complexity costing runway. A 500 person business may need that complexity because now there are second-order effects that need to be mitigated for every business decision. It's not a black-and-white "always avoid complexity" it's "add complexity when it makes sense" and even that question has a lot of nuance because sometimes the business just needs to survive for another couple of months.
Complexity, if it can be reduced to a single measurable dimension, is only one of several factors in a solution space.
There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of what I consider the differentiators between a senior and a staff+ developer.
This is an excellent article. Thought-provoking, and I'll remember the two loops from here on.
> What if we had one system just for speed?
Like a beta? It would take incredible discipline from the business and customers not to consider that production software and demand 99.99% uptime and zero bugs.
> The avoider, the reducer, the recycler.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade-offs, though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal but compromising will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
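To make "do it on a segment" concrete, here is a minimal sketch of the idea, with entirely hypothetical names and thresholds (nothing from the article): hash each user into a stable bucket and expose the experimental path only to a small slice, so a failed experiment never becomes something the whole customer base depends on.

```ts
// Deterministically map a user id to one of 100 buckets.
function bucketOf(userId: string, buckets = 100): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % buckets;
}

// A user is in the experiment if their bucket falls below the rollout percentage.
function inExperiment(userId: string, rolloutPercent: number): boolean {
  return bucketOf(userId) < rolloutPercent;
}

// Usage: only 5% of users ever see the experimental feature; everyone else
// stays on the stable path, which keeps the blast radius of the MVP small.
if (inExperiment("user-42", 5)) {
  console.log("render experimental feature");
} else {
  console.log("render stable feature");
}
```

The point isn't the hashing; it's that the experiment has an explicit, revocable boundary instead of quietly becoming production for everyone.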
The article is all about technical communication — diagrams, architectural discussions, code snippets. The more difficult piece to communicate is product sense: which user feedback indicates a genuine trend, when a feature request is a workaround for an underlying issue vs. the issue itself.
It’s not difficult for seasoned engineers with deep technical backgrounds to whiteboard a distributed system in twenty minutes. It takes hundreds of customer discussions, invalid hypotheses, and years of experience building judgment about whether this is the right solution at the right time.
The engineers who compound quickly have usually built their skills in both areas concurrently. Communication of the latter is more challenging due to the judgment-based foundation beneath it.
Even with AI, there is a clear difference between juniors and seniors.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
I’m curious about this scale vs speed distinction.
Every codebase includes parts that are more experimental and parts that are more core. My sense is that AI can help on both of these fronts (i.e. building rapid prototypes on the fringes and hardening the core with better test coverage).
I really dislike the "ah this is my favorite senior" language. The author would have done well to simply leave this "rating" of different kinds of people out, and it would not harm the article. In fact, it would improve it.
People don't want to be judged in the introduction of an article, based on how they like to approach their literal dayjob. It's a weird jab.
I found that the proposers of features "want everything" because they don't know what is critical - they're therefore totally unwilling to accept anything other than "the full monty". So as a senior developer you cannot propose any faster route.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
> “I found this new tool and it’s pretty cool ...”

yup

> “This company <company totally unlike the one we’re in> does things this way, so …”

agreed

> “Here, look at this HackerNews post that says this is best practice, we should probably …”

sir/m'lady, we're at war from now on. This is the only reason I come here. Of course I don't take everything carelessly, but the amount of experts on this forum is damn high and this is the only forum in the last 10 years that helped me grow so much.

> I don’t like senior developers who like trying new technology. I like ones that avoid more complexity.
I guess the author has never worked on a dog shit system with no tests at all and constant downtime.
I have worked with “complexity averse” engineers who would rather fix the edges over and over again, than roll up their sleeves and just get the job done.
I just don’t believe that using new tools is at odds with avoiding complexity.
Sometimes you have to take it on the chin, and you get to use the new shiny thing along the way to move much faster.
I may be missing something, but the "left" and "right" loops strike me as slightly different words for the same exact thing.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
>We could call this the ‘Speed’ version of the system. It’s not meant to be understandable, the goal is getting things good enough to take it to the market for feedback.
AI is actually quite awful for prototyping, because it makes it far too easy to add random crap to your "prototype" without any specific intention. This quickly transforms the prototyping process from something high-level and geared towards building a mental model of the real system into something akin to copy-editing a random piece of software with no coherent mental model involved. Moreover, prompting allows you to gloss over some of the essential complexity of the task without getting any notion of the scope of the effort actually required. In other words, people end up failing to make necessary decisions while simultaneously getting bogged down in unnecessary ones.
In short, fast feedback loops are only useful if there is actual feedback involved.
While I agree that adding code contributes to complexity and is problematic, there is lots of code in existing code bases that is overly complex due to outdated past requirements or less-than-perfect human coders. The current flood of AI-driven security fixes demonstrates that AI can be pretty good at detecting security edge cases. It is not inconceivable to use it to also reduce code complexity.
Hits home for me; although a lot of times adding complexity is not about your opinion as a senior developer but rather what the business wants. I've definitely worked jobs where I helped create microservice kubernetes nightmares, and while this was partially my fault for wanting to play with shiny things, a lot of this was just "this is what the business wants and you have the expertise to do it", and I'd kinda shrug and go OK. I worked one job (small business) where an executive once leveled with me that the reason they wanted the complexity is because it looked good to investors, not because it was an actual need.
FWIW though the idea about a "speed" product and a "stability" product isn't new. We used to call it "prototyping". I don't know when/how that disappeared from the collective consciousness. "have a space where we can build things fast with horrible practices" isn't some AI era innovation, it's what smart companies have done for decades.
> Senior developers care a lot about stability
I saw this yesterday
https://trinkle23897.github.io/learning-beyond-gradients/
They are very remotely related yet somehow very close.
One could say that in order to be a senior developer in any area, more-than-good communication skills are required.
> I don't like the kind of senior developer that says "I found this new tool and it’s pretty cool ..."
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
Good read. The big elephant in the room, though: you likely won't purely hand-code the Stable version for much longer. So where's that split? Prototype vs. prod? Feature flags? Canary? A two-codebase nightmare? All of this already exists.
The message that hits for me is that of AI being a destabilizer while simultaneously being an accelerator. The Speed/Scale suggestion won't address this. A codebase no one understands, growing at machine speed won't go away just because you drew a box around it. The fix is likely more mundane stuff like process and role shifts, smaller PRs, tests, tooling, ownership principles.
I feel like I was totally on board until the conclusion about one fast system and one stable one. It's not really possible in practice: once a customer starts paying for something, even a vibe-coded app by a salesperson, it's now a stable system.
The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.
I don't even find this very new: many companies I've been at have tried to spin-off a "fast" team to sell stuff.
It sounds like a perfect idea on paper until you notice that junior devs will not be able to learn about stable code. Unless AI gets good enough to write stable code, or good enough that no human has to look at the code, the next generation will face a bigger problem than we do now. Well, it's AI that started it, so let's make AI take responsibility... Oh, it can't. Now what?
I partly agree. Agents are not going to replace senior devs. Exactly for the internal context and the decision making that comes with it.
But senior devs have also been expected to have a compounding effect, even pre-AI: writing a single doc, refactoring legacy code to make it extensible, building security frameworks specific to the project, and many more. All of these compound across the dev team.
I think the same will happen with agents working on an org-specific paved path set by senior devs.
I think it's possible that this idea would work as a communication/branding strategy for senior developers, though I don't think it's strictly true.
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
I tripped over the double-entendre of the teaser quote and then found it ironic that the author is a copy writer.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
The polarization of speed vs. scale concerns on a team is interesting.

It maps to what we believe on our team: functional vs. non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
I stopped communicating my experience-derived lessons when I discovered that 1. it cheapened the perception of "my genius", and 2. nobody wants to hear it anyway. From non-tech workers for whom I'd write a bat or bash script, to engineers for whom I'd debug a complex race condition - they all just want the answer and care nothing about how I got it.
Fine, then, I'll keep the experience to myself.
The safest answer a sales person can give is "yes".
The safest answer an engineer can give is "no".
> this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can
And push an insurmountable pile of technical debt onto the successor.
Well, yeah, I understand the idea and I'm all for it: the less code the better, the less changes the better.
However, in certain industries it is no longer the right approach for the job. In modern frontend development, if you don't update your codebase for a couple of months, it falls so far behind that pushing an upgrade becomes way more expensive than daily minor package updates. Yeah, I hate this as much as you do, but this is the pace frontend is moving at, and if you don't follow, you will pile up technical debt.
Interesting article. I appreciate the range of perspectives here, and the overall pitch to keep the most experienced in frame alongside new-fangled advancements (AI).
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
Speed… speed… velocity… speed. All I hear about these days. Every meeting.
Honest question: does high velocity / being a first mover ever really pay off these days?

I don't feel like being first to market with AI slop has actually paid off for anyone. Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of work proving the idea works, and everyone else swoops in with better product or at least at a cheaper rate.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
I enjoyed reading this, and I agree with the underlying message: communicating better with our audience.
I think the framing started on the right path and then took a slightly wrong turn.
Both loops presented benefit from being tighter, faster. One to take a system to a “stable” (maintainable) setpoint quickly. The other to handle uncertainty.
And the additional insight about splitting the systems to better adapt to AI… we’ve described spikes for years, well before AI went mainstream.
They fail to communicate in the same way we fail to download a copy of "the truths of the world as we know it" into every child's brain. It's easy to say "look both ways when you cross the road," but speech is so one-dimensional. It's a slow tape reel, and that's just the encoding.
I actually think the article makes some pretty interesting points. It's not about the name of it though.
It seems to me that the author fails to extrapolate on the effects of recursive self improvement. The only things preventing 95% engineer obsolescence will be compute/energy constraints and the speed of adoption, which can take years for large infrastructure companies. But it's coming.
I agree with the author's premise - that one feedback loop optimizes for speed and the other for scale - but I don't think the market is bearing out the conclusion that AI should be used to enable more rapid experimentation, where we then scale what works.

Many vendors seem to be learning (or not learning, but throwing their weight against it anyway) that hastily generated AI features cause customer dissatisfaction, as more people brand them "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
There's a lot of opportunity in being the manager who can still see it.
I think that if this becomes an actual problem, there will be such a massive incentive to add AI to the scale/compression/risk avoidance side that there will be automated tools specialized in that kind of work.
I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
It depends on the product, but in many cases you can't actually decouple the complexity, because the complexity is the product. There are times when the archaic flow needs to work for some stupid compliance reason.
Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.
An average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the sayer meant; the only difficulty could come from not knowing the words or mistaking ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred via words well; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot be. AI can blow you out of the water at knowing more facts, but it doesn't yet utilize them in a way that allows it, surprisingly often, to have surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.