At this stage, AI is no longer a tool that enhances your ability to ship code; it has replaced you entirely in that role. You don't control what is shipped, and you can't verify whether it's correct. That's a serious problem! As software engineers, we remain accountable for code we no longer fully understand.
What comes next, then, feels less like a new software practice and more like a new religion, where trust has to replace understanding, and the code is no longer ours to question.
Or formal methods and other tools for verifying the code's security?
Speak for yourself; I don't ship any code that I don't fully understand. Yes, that requires less autonomous AI and less frequent merging. But I don't even want to think about the disasters that could happen if you really get into the habit of shipping code you can't verify or understand.