Web crawlers didn’t routinely take down public resources or use the scraped data to generate facsimiles whose ethics people are still debating. Their presence barely registered, and the indexing actually helped the sites being crawled. It isn’t remotely the same thing.
https://www.libraryjournal.com/story/ai-bots-swarm-library-c...
AI bots must've taken down that link you shared, it won't load :/
And search crawlers/results have been producing snippets that prevent users from clicking to the source for well over a decade.
Edit: it loaded. I don't see how the problem isn't simply solved by an off-the-shelf solution like Cloudflare. In the real world, you wouldn't open up a space/location if you couldn't handle the throughput. Why should online spaces/locations get special treatment?
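For what it's worth, the core of what a Cloudflare-style front end does against scrapers is per-client rate limiting. A minimal sketch of the idea, assuming a token-bucket policy (the class and parameters here are illustrative, not any particular vendor's API):

```python
import time

class TokenBucket:
    """Per-client token bucket: refills `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bot bursting 20 requests against a bucket allowing 1 req/sec, burst of 5:
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(20)]
print(results.count(True))  # only the initial burst gets through
```

Hosted services add shared IP-reputation data and challenge pages on top, which is the part a small library site can't easily replicate on its own.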