Notably, Fred Brooks's essay “No Silver Bullet” argues that there has never been a new technology or way of thinking or working that has led to a 10X increase in the speed of software development.
That was true for almost seventy years until roughly last year.
AI is the silver bullet - my output is genuinely 10X what it was before Claude Code existed.
I'm curious to see how much faster AAA games will hit the market in the coming years compared to the pre-LLM era. Or how much of the aging COBOL code base out there will disappear in the next decade.
When concrete things like that start to happen, then I will start to believe in the 10x claim.
The main point of The Mythical Man-Month was that communication cost across people becomes the dominant cost as projects grow in complexity.
So increasing individual output by itself is not enough to affect the argument. It could be, if you also reduce the number of people needed for a project, where "people" means everyone involved in the project, not just software engineers. But there are strong forces in large orgs pulling toward larger project sizes: budgeting overhead and the general tendency of large orgs to optimize for legibility.
IMO the only way this will change is when new companies challenge the existing big players. I think AI will help make that happen (e.g. agentic e-commerce challenging the incumbents), but it will take time.
Oh spare me! Anyone can "output" 10x more code. Fred knew you could slap 10x more people on a team and "output" 10x more code.
What this article explains is why, despite your feelings of untouchable success, the average experience of using software just keeps getting worse and worse and worse, making this the worst era for software quality that I've ever lived through.
Didn't we already do this with every company looking to hire "rockstar programmers"? I don't recall that ending well.
That is far from proven, 'far' being the key word here in more than one sense.
At _this_ moment, AI is in the stage of producing things - by a factor of 10 or more, if you like. But what comes afterwards, when all this mush of code has to produce _reliable_ results? Then we're not talking man-months but man-years or decades to fix these billions, maybe trillions, of lines of opaque, probabilistically generated code. You have to take the average over these two stages, unless something qualitative changes in the models.
Conversely, the value of software has dropped to 1/10 of what it was before Claude Code existed.
I’m being glib, but there’s a whole class of software (e.g. simple CRUD apps) that just doesn’t have any marginal value anymore. So it doesn’t matter if it’s 10X faster or 100X faster. 100 x $0 is still $0.
So there's input (prompts/requests/tokens), then there are outputs (lines of code), and then there are outcomes. How much have the outcomes improved? Not just yours - I'm more curious about the outcomes with regard to the actual need your projects are solving.
First counterexample that comes to mind: Rails vs. 90s networked/shared line-of-business CRUD app development was a 10x factor. It also enabled a lot of internal tools that wouldn't have been worth building without it.
But after people's expectations adjusted it was just back on the treadmill.
I don't think we've found a new steady-state yet, but I have some gut feeling guesses about where it's going to be.
This was true as programming languages evolved too. It was so much easier to write in scripting languages than in C. You could crap out scripts like crazy - no cc refusing to give you a binary to get in your way.
Clearly... it still wasn't a silver bullet, because output is a bad metric. I thought it was one only managers valued... but apparently Anthropic has finally convinced devs to value it too? I guess it definitely hits that dopamine receptor hard.
The disconnect here is the lack of proof that your increase in personal output actually increases the speed of software development. Considering the ninety-ninety rule (https://en.wikipedia.org/wiki/Ninety%E2%80%93ninety_rule) - a joke, but a true one about how software projects go - does AI skip that second 90%? Or do we add a whole new bottleneck of review and corrections, and still need to code that last 90%?
When I measure software dev, delivery of code isn't even a metric I care about. It is a key part of the process, to be sure, but I care about results - Did we ship? Did it work? Do we have happier customers and a smaller bug list?
In my experience, while I can answer "yes" to those questions for people who use AI assistance surgically, applying it where its strengths lie... I can answer an emphatic "no" for the teams I've worked with who are "AI-first", making AI usage itself part of their goals.
10x the amount of code or features =/= 10x the speed of software development.
You’re describing output while the essay is discussing productivity.
If you’re 10x more productive, someone is willing to pay you 10x as much as they were last year, because you’re producing 10x as much value as before.
Has your salary increased 10x?
AI is certainly able to increase coding speed, especially for experienced engineers who can design the analytical parts themselves (data structures, interfaces, invariants, and process), but in large projects and/or organizations, queuing theory (especially as understood by lean development practitioners like Don Reinertsen) is going to be nasty.
Lean development theory teaches us that in a multi-workstream, multi-stage development process, developers should be kept at roughly 65-75% utilization. Otherwise, counterintuitively, work queues grow explosively the closer utilization gets to 100%. The reason is that slack in the system absorbs and smooths out perturbations and variability, which are inevitable.
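A rough illustration of why, using the textbook M/M/1 queueing formula rather than anything Reinertsen-specific (my simplification, not his exact model): expected queue length is rho^2 / (1 - rho), which blows up as utilization rho approaches 1.

    # Illustrative sketch only: expected queue length in an M/M/1 queue,
    # L_q = rho^2 / (1 - rho), as a function of utilization rho.
    for rho in (0.65, 0.75, 0.85, 0.95, 0.99):
        print(f"utilization {rho:.0%}: ~{rho**2 / (1 - rho):.1f} items waiting")

Going from 75% to 95% utilization takes the backlog from roughly 2 items to roughly 18; that's the slack argument in one loop.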
Furthermore, underutilization behaves a lot like a stock option: its value increases as variability increases. Slack enables quick pivots with less advance notice. It builds continuous responsiveness into the system. And as the Agile Manifesto tells us, excellent software development is characterized more by the ability to respond to change than by the mere ability to follow a plan. Customers appreciate responsiveness from software vendors; it builds trust, which is becoming all the more valuable with the rise of AI.
But AI-driven development threatens to increase, not decrease, individual engineer utilization. More is expected, more is possible, and frankly, once you learn how to guardrail the AI and stop trusting it to do the analytical design, the speed a senior engineer can achieve while writing great code with AI assistance often feels intoxicating.
I think we're going to go through a whole new spate of hard, counterintuitive lessons similar to those many 1960s and 70s developers like Fred Brooks and his IBM team learned the hard way.
I've been thinking about this and have wanted to discuss it with people. I think the 10x thing has been broken, but I don't think it's because the premise of "No Silver Bullet" was false - I think it's because LLMs have the ability to navigate some of the _essential_ complexity of problems.
I don't think anyone has really wrestled with the implications of that yet - we've started talking about "deskilling" and "cognitive debt", but mostly in the context of "programmers are going to forget how to structure code, how to use the syntax of their languages, etc." I'm not worried about that, as it's the same sort of thing we've seen for decades - compilers, higher-level languages, better abstractions, etc.
The fact that LLMs are able to wrestle with essential complexity means that using them is going to push us further and further from the actual problems we're trying to solve. Right now, it's the wrestling with problems that helps us understand what those problems are. As our organizations adopt LLMs that are able to take on _those_ problems - that is, customer problems, not problems of data, scaling, and so forth - will we hit a brick wall where we lose that understanding? Where we keep shipping stuff but it gets further and further from what our customers need? How do we avoid that?
For your sake I hope that your pay is determined by your “output”, and not your long-term usefulness.
> that has led to a 10X increase in the speed of software development.
> AI is the silver bullet - my output is genuinely 10X what it was before Claude Code existed.
Those are not the same.
You can add 5 different features to a project and still provide less value than the 5-line diff that resolves a performance bottleneck.
Just because code has been put out does not mean the software is “developed”.
10x would only be possible if your output was low before Claude Code.
If AI is the silver bullet, I do not understand why so many shot-up projects are still wandering around the freelance market.
The premise of "No Silver Bullet" is wrong (LLMs just made it obvious, but it has always been wrong).
The premise is that software development consists mostly of "essential complexity" rather than "accidental complexity." But I think anyone who has worked as an SE in the past decade has found the opposite to be true.
It's not only that software development is full of accidental complexity. Programmers (and the decision makers above them) have always been actively creating accidental complexity. Making a GUI program hasn't gotten easier since Visual Basic. In fact, with each JavaScript framework and technique wrapped around the DOM rendering engine, it has gotten harder over the years. Until LLMs made it easier again (by creating a permanent dependency on LLMs - if you intend to edit the code manually afterwards, it has become even harder!).
I haven't yet seen anyone with a concrete example project (public ideally, but even describing private efforts in enough detail to enable potential criticism would be fine) making a claim as strong as 10x. Are you willing to break the mould and show us what we're all missing?