Even writing code the good old way, we of course experiment. I remember Fred Brooks's old rule: "Plan to throw one away; you will, anyhow." But then there's the "second-system effect", where the second system is supposedly always overengineered, trying to account for every possibility.
And then there are the times when the quick, sloppy PoC you planned to throw away gets forced into production and is still impossible to change ten years down the road.
AI makes all these problems so much less painful.
I worked at a company that had a huge monolithic ERP system (their product, to be clear) with no clean separation between the GUI layer and the business logic. The GUI was also tied to an ancient version of the Borland C++ compiler. They put in a humongous effort to move to a slightly more modern UI library and a client-server architecture.
However, someone had decided that messages in XML or JSON would be too inefficient (they already had performance issues), so they went with a binary message protocol of their own design, with no provision for protocol evolution. Everything communicating with the server had to be on exactly the same version, or it would throw an error. So of course they very, very rarely updated the protocol.
I think the best help from AI will be cleaning up such real-life messes of soul-crushing architectural regret. Will it do it perfectly? Certainly not. But I wouldn't do it perfectly myself either if I were forced to do it, and it would take me a hell of a lot more time.