if it catches a lot of bugs, maybe you'd be better off letting it write the code in the first place :)
This is definitely not correct in my opinion. You’re essentially saying, instead of a person actually getting better at the craft, just give up and let someone else do it.
No no, that's the reverse centaur. Structuring your own thoughts is the human work.
IME, not really. When you prompt it to review its own written code, it will end up finding a bunch of things that should have been done otherwise. And then you can add different "dimensions" in your prompt as well, like performance, memory safety, idiomatic code, etc.
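In practice that kind of self-review pass can be as simple as templating the dimensions into the prompt. A minimal sketch (the dimension list, function name, and wording here are my own assumptions, not any particular tool's recipe):

```python
# Hypothetical helper: assemble a self-review prompt that layers extra
# "dimensions" (performance, memory safety, idiomatic style) on top of
# the base review request.

REVIEW_DIMENSIONS = [
    "correctness",
    "performance",
    "memory safety",
    "idiomatic style",
]

def build_review_prompt(code: str, dimensions=REVIEW_DIMENSIONS) -> str:
    """Ask the model to critique code it just wrote, along each dimension."""
    focus = "\n".join(f"- {d}" for d in dimensions)
    return (
        "Review the following code you wrote. For each issue, state the "
        "location, the problem, and a suggested fix.\n"
        f"Focus on these dimensions:\n{focus}\n\n"
        f"```\n{code}\n```"
    )

print(build_review_prompt("def add(a, b):\n    return a - b"))
```

You'd then send that prompt in a fresh chat, so the review isn't anchored on the original generation context.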
It also writes lots of bugs, which it'll catch some of in an independent review chat.
This is bogus. If you think LLMs write less buggy software, you haven't worked with seriously capable engineers. And now, of course, everyone can become such an engineer if they put in the effort to learn.
But why not just use the AI? Because you can still use the AI once you're seriously good.