I think it depends on your experience. I have a lot of experience from the Old Days™ and from developing for microcontrollers, so I find reading assembly very natural and straightforward. When coding for really small MCUs I've often had the disassembly regenerated and shown in another window on every incremental build, so I can check that it's what I was expecting to see.
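For a concrete sense of what that check looks like, here's a minimal sketch; the register name and address are invented stand-ins for whatever your part's datasheet actually gives you:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output data register; the name and
     * address are made up for illustration -- the real ones come from the
     * part's datasheet. */
    #define GPIO_ODR (*(volatile uint32_t *)0x48000014u)

    void led_on(void)
    {
        /* The volatile access can't be dropped or cached, so the disassembly
         * should show exactly one load, an OR, and one store to this address.
         * If I see anything else (or nothing at all), something went wrong. */
        GPIO_ODR |= (1u << 5);
    }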
I do agree that knowledge of compiler optimizations is really important to working this way, though you'll eventually pick them up anyway. I don't see much value in looking at -O0 or -Og disassembly. If you're going to do this, you want the strongest code the compiler can generate, which is usually either -O3 or -Oz (each strong in its own way). -O0 disassembly is... just so much pain for so little gain. Besides, -O3 is what breaks stuff anyway, so that's the output worth checking!
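As a quick illustration of the gap, here's a toy function you can dump at both levels and compare; the exact output of course depends on the compiler, version, and target:

    /* sum.c -- generate the assembly yourself, e.g.
     *   gcc -O0 -S -fverbose-asm sum.c -o sum_O0.s
     *   gcc -O3 -S -fverbose-asm sum.c -o sum_O3.s
     * At -O0 every variable typically lives on the stack, so the loop body is
     * buried under loads and stores. At -O3 on a typical x86-64 or ARM target
     * you'll usually see the loop unrolled and vectorized -- which is the code
     * that actually ships. */
    #include <stddef.h>

    long sum(const long *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }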
For someone without this level of experience (and who isn't interested in learning)... yeah, I can see why you'd want to do this another way. But if you've got the experience already, it's plenty fast enough.
The thing is, you're gaining a bunch of knowledge about compiler internals and optimisations, but those aren't necessarily specified or preserved, so it's questionable how valuable that experience actually is. The next release of the compiler might rewrite the optimiser, or introduce a new pass, and so your knowledge goes out of date. And even if you have perfect knowledge of the optimiser and can write code that's UB according to the standard but will be optimised correctly by this specific compiler... would that actually be a good idea?
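To make that concrete, here's the classic sort of example (a sketch; whether it "works" depends entirely on the compiler and flags of the day):

    /* Overflow check that relies on signed wraparound. Signed overflow is UB,
     * so the compiler may assume x + 1 > x for every int x and fold this to
     * "return 0". At -O0 you'll often get the wrapping behaviour you hoped
     * for; at -O2/-O3, gcc and clang are known to delete the check. Knowing
     * that your current compiler happens to keep it is exactly the kind of
     * knowledge a new optimiser pass invalidates. */
    int will_overflow(int x)
    {
        return x + 1 < x;   /* UB when x == INT_MAX */
    }

(If you actually want wrapping semantics, flags like -fwrapv make it defined behaviour, which is a much safer bet than trusting the optimiser's current mood.)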
All of that is less true in the microcontroller world, where compilers change more slowly and your product will likely be locked to a specific compiler version for its entire lifecycle anyway (and you certainly don't have to worry about end users compiling with a different compiler). In that case maybe getting deeply involved in your compiler's internals makes more sense.