I think there's a useful distinction nobody here is making: there's a difference between using AI as a writing tool and using AI as a thinking tool.
Most people in this thread are talking about the output stage. You know: polish my text, fix my grammar, generate my message. That's where you lose your voice. But the blank page problem borski describes isn't really a writing problem, it's a thinking problem. Once you know what you want to say, saying it tends to be the easy part for us writers (sometimes lol!).
The most useful thing I've found is using AI to figure out what I actually think: rubber ducking, exploring angles, stress-testing arguments, and then closing the tab and writing it myself. You get the cognitive help without losing your voice. I've written more in my own genuine voice in the last year than I did in several years prior, and it's because I use AI for clarity instead of using it to replace my output.
I agree with that take. I find AI most useful as a sparring partner for my thought process. I also agree with the other commenter that it can, of course, influence that thought process. We have to stay aware of that and try to stay in control of the conversation.
But what if your rubber duck is actually steering your thought process (especially if you don't have a consolidated one yet)? This is why I think AI-as-editor is far better than AI-as-rubber-duck. The editor can point out your mistakes and give useful advice (similar to what you describe) and actually help your reasoning, but it won't steer your thinking unless your mistakes are severe. AI as a brainstorming rubber duck (or thinking tool), on the other hand, could be harmful to your thought process.