We run agent teams (Navigator/Driver/Reviewer roles) on a 71K-line codebase. The trust problem is solved by not trusting the agents at all. You enforce externally: Python gates that block task completion until tests pass, acceptance criteria are verified, and architecture limits are met. The agents can't bypass enforcement mechanisms they can't touch. It's not about better prompts or more capable models. It's about infrastructure that makes "going off the rails" structurally impossible.
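A sketch of the kind of external gate the comment describes, added here as an example and not the commenter's own code. The function names, the policy layout and the per-file line cap standing in for "architecture limits" are all assumptions; the gate and its policy are assumed to live outside the agent's writable workspace and to be called by the orchestrator.

```python
import subprocess
import sys
from pathlib import Path

def _passes(cmd, cwd, timeout=600):
    """Run a check command in the workspace; only exit code 0 counts as a pass."""
    try:
        return subprocess.run(cmd, cwd=cwd, capture_output=True,
                              timeout=timeout).returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False  # fail closed: a check that cannot run is a failed check

def run_gate(workspace, policy):
    """Return a list of failures; an empty list means the task may be completed."""
    ws = Path(workspace)
    failures = []
    if not _passes(policy["test_cmd"], ws):
        failures.append("tests failed")
    for crit_id, cmd in policy["acceptance"].items():
        if not _passes(cmd, ws):
            failures.append(f"acceptance criterion not verified: {crit_id}")
    # Assumed architecture limit: a hard cap on lines per Python file.
    for path in sorted(ws.rglob("*.py")):
        with path.open("rb") as fh:
            lines = sum(1 for _ in fh)
        if lines > policy["max_lines_per_file"]:
            failures.append(
                f"architecture limit exceeded: {path.relative_to(ws)} ({lines} lines)")
    return failures

def complete_task(workspace, policy):
    """Called by the orchestrator; the agent's own 'done' claim is never consulted."""
    failures = run_gate(workspace, policy)
    return {"status": "blocked" if failures else "complete", "failures": failures}

if __name__ == "__main__":
    # Constructed policy for illustration; keep the real one where agents cannot write.
    POLICY = {
        "test_cmd": [sys.executable, "-m", "pytest", "-q"],
        "acceptance": {},  # e.g. {"AC-1": ["./checks/ac1.sh"]} (hypothetical path)
        "max_lines_per_file": 500,
    }
    result = complete_task(sys.argv[1] if len(sys.argv) > 1 else ".", POLICY)
    print(result)
    sys.exit(0 if result["status"] == "complete" else 1)
```

The non-zero exit code is what the orchestrator keys on to refuse the state change, and a check that cannot start or times out is treated as a failure rather than a pass.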