Here's an experiment that might be worth trying: temporarily delete a source file, ask your coding agent to regenerate it, and examine the diffs to see what it did differently.
This could be a good way to learn how robust your tests are, and also what accidental complexity could be removed by doing a rewrite. But I doubt that the results would be so good that you could ask a coding agent to regenerate the source code all the time, like we do for compilers and object code.
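For the mechanical part of the experiment, something like the following would work. This is a sketch in a throwaway git repo, with the agent step simulated by writing a "regenerated" version by hand; the file name and contents are purely illustrative.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q

# A stand-in for the real source file you would delete.
printf 'def add(a, b):\n    return a + b\n' > util.py
git add util.py
git -c user.email=you@example.com -c user.name=You commit -qm "original"

# Step 1: delete the file (git still has the committed original).
rm util.py

# Step 2: ask the coding agent to regenerate util.py from the surrounding
# code and tests. Simulated here by writing a slightly different version.
printf 'def add(x, y):\n    return x + y\n' > util.py

# Step 3: examine the diff against the original, then run your tests.
git diff -- util.py
```

Because the original is still in git, `git diff` shows exactly what the agent did differently, and running the test suite afterward tells you how much of the file's real behavior the tests actually pin down.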
I just had Claude rewrite a utility that is simple as far as the code goes, but complex if you don't know the gotchas of a particular AWS service. Its version was much better than my implementation, and it already knew how things work underneath.
For context, my initial implementation went through the official AWS open source process (it's no longer there) five years ago, and I'm still getting occasional emails and LinkedIn messages because it's one of the best publicly available ways to solve the problem. The last couple of times, I basically gave the person the instructions I gave ChatGPT (since I couldn't give them the code) and told them to have it regenerate the code in Python. That does better than what I wrote, since back then I didn't know the service as well as I do now, and the service has since gained more features you have to be concerned about.