Fortunate to be reminded of this right now, especially the pull-quote about conceptual integrity.
This is the reason why AI-assisted programming has not turned out to be the silver bullet we have been hoping for - at least not yet. Muddled prompting by humans gets you the Homer Simpson car you wished for, one that will eventually collapse under its own weight.
I've been thinking a lot about Programming as Theory Building [0] as the missing piece in AI-assisted engineering. Perhaps there are approaches which naturally focus on the essence while ignoring the accidents, but I'm still looking for them. Right now, the state of the art I see ignores accident and essence alike, and degrades the ability to make progress.
Please let me know if there are approaches you know of that work! And lest this sound pessimistic - far from it. This state of affairs is actually intoxicatingly motivating. It feels like we have found silver, and just need to learn to mould bullets.
[0] Another classic piece of required reading for the industry: https://pages.cs.wisc.edu/~remzi/Naur.pdf
Notably, Brooks's essay "No Silver Bullet" states that there has never been a single new technology or way of thinking or working that has led to a 10X increase in the speed of software development.
That was true for almost seventy years until roughly last year.
AI is the silver bullet - my output is genuinely 10X what it was before Claude Code existed.
This was true as programming languages evolved, too. It was so much easier to write in scripting languages than in C. You could crank out scripts like crazy - no cc refusing to give you a binary to get in your way.
Clearly, it still wasn't a silver bullet, because output is a bad metric. I thought it was one only managers valued, but apparently Anthropic has finally convinced devs to value it? I guess it does hit that dopamine receptor hard.
I'm curious to see how much faster AAA games hit the market in the coming years compared to the pre-LLM era. Or how much of the aging COBOL code base out there disappears in the next decade.
When concrete things like that start to happen, then I will start to believe in the 10x claim.
Writing code is a part (sometimes a big part, sometimes not) of delivering software to production. The overall system throughput is the interesting thing to look at.
I've been thinking about this and have wanted to discuss it with people.
I think the 10x barrier has been broken, but I don't think it's because the premise of "No Silver Bullet" was false - I think it's because LLMs have the ability to navigate some of the _essential_ complexity of problems.
I don't think anyone has really wrestled with the implications of that yet - we've started talking about "deskilling" and "cognitive debt," but mostly in the context of "programmers are going to forget how to structure code, how to use the syntax of their languages, etc." I'm not worried about that, as it's the same sort of thing we've seen for decades - compilers, higher-level languages, better abstractions, and so on.
The fact that LLMs are able to wrestle with essential complexity means that using them is going to push us further and further from the actual problems we're trying to solve. Right now, it's the wrestling with problems that helps us understand what those problems are. As our organizations adopt LLMs that are able to take on _those_ problems - that is, customer problems, not problems of data, scaling, and so forth - will we hit a brick wall where we lose that understanding? Where we keep shipping stuff but it gets further and further from what our customers need? How do we avoid that?
The premise of "No Silver Bullet" is wrong (LLMs just made that clear, but it has always been wrong).
The premise is that software development is mostly "essential complexity" rather than "accidental complexity." But I think anyone who has worked as a software engineer in the past decade would have found the opposite to be true.
"claude, connect to a k8s pod in prod and grab a 30s cpu profile, analyze and create a performance test locally for the top outlier, verify your fix and create a PR"
As a software engineering manager, I always look to staff up a project as much as possible at the beginning, aiming to do as much in parallel up-front as we can. If some things take longer than expected, I already have a team of engineers with all the context since the project kicked off who can help each other with any longer-running tasks. An engineer who has completed a smaller chunk of work can help out with items on the critical path, for example.
The bearing of a child takes nine months, no matter how many women are assigned.
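Brooks's underlying arithmetic is that only the partitionable part of the work speeds up, while intercommunication effort grows with the n(n-1)/2 pairwise channels. A toy Python model - the 30% serial fraction and the per-channel overhead are made-up illustrative numbers, not anything from the book:

    # Toy schedule model: Amdahl-style serial fraction plus Brooks's
    # n(n-1)/2 communication channels, each eating a slice of the schedule.
    def schedule(months_of_work, team_size, serial_fraction=0.3, overhead_per_pair=0.002):
        serial = months_of_work * serial_fraction               # cannot be partitioned
        parallel = months_of_work * (1 - serial_fraction) / team_size
        pairs = team_size * (team_size - 1) / 2                 # communication channels
        coordination = months_of_work * overhead_per_pair * pairs
        return serial + parallel + coordination

    for n in (1, 3, 9, 27):
        print(f"{n:>2} people: {schedule(90, n):5.1f} months")
    # 1 -> 90.0, 3 -> 48.5, 9 -> 40.5, 27 -> 92.5 months: staffing up
    # helps at first, then coordination overhead dominates.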
For the human makers of things, the incompletenesses and inconsistencies of our ideas become clear only during implementation.
Conceptual integrity is the most important consideration in system design.
There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement in productivity.
---
These ideas still apply very well to modern society.
But personally, I hope science advances to the point where nine women really can have a baby in parallel.
We may need that to prevent demographic collapse and keep the pension system from running out of money.
It would probably be more practical to make old age less expensive than to inject more people into the bottom of the demographic pyramid. Those young people eventually get old too. I am looking forward to my sentient robot caretaker:
“Open the refrigerator door, HAL”
“I can’t do that right now”
"The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination." -FB
Indeed a lot of things have changed. A worthwhile exercise is to read the book, contemplate how things have changed, and try to map lessons from the book onto modern technology and organizational practices. A LOT of the core principles are still relevant IMO, even if many of the implementation details are not.
There are entire C-corps of monkeys out there.
Also, I know there will be a lot of boilerplate applications that just don't look good or don't seem to have been thought out well early on.
Folks will use that as a coping mechanism, but huge changes are coming.
> AI is the silver bullet - my output is genuinely 10X what it was before Claude Code existed.
Output and value are not the same.
You can add 5 different features to a project and still provide less value than the 5-line diff that resolves a performance bottleneck.
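A hypothetical example of the kind of small diff meant here - a linear scan inside a loop replaced with a set lookup:

    # Before: `existing` was a list, so every membership test walked the
    # whole list, making deduplication O(n^2) overall.
    def new_items(incoming, existing):
        seen = set(existing)   # the whole fix: hash lookups instead of list scans
        return [x for x in incoming if x not in seen]
    # Same behavior, but a million-item backlog drops from minutes to
    # seconds - often worth more than several shipped features.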
Vibe-coded software is the equivalent of a Marvel green-screen movie.
Fred Brooks wrote that book when they were programming IBM operating systems in assembly language.
Times have really, really changed - don't pay attention to this book's message except for historical fun.
That book isn't; it's built from humility, and it's a rare bright light in this god-forsaken field.
Martin Fowler, the author of the blog, may be a bit different from that.