The right use of AI requires stellar leadership, and to be honest, I don't think that kind of leadership exists. I use AI just for myself, and I run into so many traps and pitfalls. For example, I generate an article on a topic, and while this is very useful to get started, I then have to go through every sentence, because the AI makes overconfident statements that are simply not true as stated. That is still very helpful, because it forces me to think about why they are not true. But I don't see how that can ever scale: how would I know that colleagues are equally diligent?
AI is incredible in three scenarios: a) what I just described, getting you started; b) generating artifacts that can be rigorously checked (and I don't mean tests, I mean proofs); c) producing artifacts that have no meaningful notion of correctness, like a work of art.
c) is a matter of taste, and b) certainly scales, but a) is where I think trust will be essential, and I am not ready to trust anyone with that except myself.
Oh, and I think c) is currently being applied to software engineering, by people who cannot distinguish the engineering part of software from the art part. Which is just funny right now, and will eventually be catastrophic.