"regurgitate human intelligence from training data" - exactly. And the tricky part is when your actual code contradicts that training data.
The model has seen thousands of examples of "how to implement X". When your codebase does X differently, the training data wins. You can watch it happen: point out the conflict, and a model that was actually reasoning would shift gears, ask questions, acknowledge the tension. But a model in retrieval mode just reiterates. Same confidence, same explanation, maybe rephrased.
That's why "I completely understand this time" keeps happening in AI responses. From the model's point of view, there's nothing to check; the pattern it retrieved already "makes sense."
In short, if you're not doing anything genuinely new (the kind of work AI will almost certainly do better than you), you're safe. Otherwise, you'll either have to put considerable effort into getting the AI to cooperate, or do the hardest tasks yourself, simply because the limitations described above haven't been addressed.