I remember reading a Cloudflare blog post about crawler separation and responsible AI bot principles, where they argued that every bot should have one distinct purpose. Now they're building crawling infrastructure themselves, and their own /crawl endpoint lists "training AI systems" as a use case alongside regular crawling. So not only are they in the crawling business now, they're not following their own separation principle. To be fair, there's a business rationale here. But it's hard not to notice the irony. https://blog.cloudflare.com/uk-google-ai-crawler-policy/