Yeah, I like this framing a lot. There comes a point, after working on a system for a while, when there are no details: every aspect of how the system works is understood to be in some way significant. If one of those details is changed, you understand what the implications of that change will be for the rest of the system, its users, etc. I worry that in a post-AI software world, that’ll never happen. The system will be so full of code you’ve barely looked at that understanding it all will be hopeless. If a change is proving impossible to make without introducing bugs, it will be more sensible to AI-build a new system than to understand the problem.
I sometimes wonder if modularity will become even more important (as it has in physical construction, e.g. with the move from artisanal, temperamental plaster to cheap, efficient drywall), so that systems AI cannot reliably modify can easily be replaced.