This is exactly what happens when there's no accountability infrastructure for AI agents.
Google's response is to restrict access — a blunt instrument that punishes legitimate users, because the platform has no way to distinguish agents that behaved correctly from agents that didn't.
The real fix isn't restrictions, but cryptographic behavioral commitments — agents declare what they'll do before execution, and any third party can verify compliance after. We don't need gatekeepers. We need verification.
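A minimal sketch of the commit-then-verify idea, using a plain hash commitment. All names and the declaration schema here are my own illustration (not NOBULEX's actual API): the agent commits to a declared scope before running, then an auditor checks both that the revealed declaration matches the commitment and that every logged action fell inside it.

```python
import hashlib
import hmac
import json

def commit(declaration: dict, nonce: bytes) -> str:
    """Published before execution: a binding commitment to the
    declared behavior. The nonce keeps the declaration hidden
    until the agent chooses to reveal it."""
    payload = json.dumps(declaration, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest()

def verify(declaration: dict, nonce: bytes, commitment: str,
           executed_actions: list) -> bool:
    """Third-party check: (1) the revealed declaration matches the
    prior commitment, and (2) every executed action was declared."""
    payload = json.dumps(declaration, sort_keys=True).encode() + nonce
    if not hmac.compare_digest(hashlib.sha256(payload).hexdigest(),
                               commitment):
        return False
    allowed = set(declaration["allowed_actions"])
    return all(action in allowed for action in executed_actions)

# Before execution: agent declares its scope and publishes the hash.
nonce = b"random-16-bytes!"  # in practice: os.urandom(16)
decl = {"agent": "crawler-7",
        "allowed_actions": ["GET /search", "GET /robots.txt"]}
c = commit(decl, nonce)

# After execution: anyone holding the log can audit it.
assert verify(decl, nonce, c, ["GET /search", "GET /search"])   # compliant
assert not verify(decl, nonce, c, ["POST /login"])              # violation
```

A real system would need signed, tamper-evident execution logs too — a hash commitment only binds the declaration, not the honesty of the log — but it shows why no gatekeeper is required: verification needs only public data.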
I've been building this: https://github.com/agbusiness195/NOBULEX