Hacker News

Humanely dealing with humungus crawlers

78 points by freediver | yesterday at 5:06 PM | 43 comments

Comments

bobbiechen yesterday at 6:02 PM

>We’ve already done the work to render the page, and we’re trying to shed load, so why would I want to increase load by generating challenges and verifying responses? It annoys me when I click a seemingly popular blog post and immediately get challenged, when I’m 99.9% certain that somebody else clicked it two seconds before me. Why isn’t it in cache? We must have different objectives in what we’re trying to accomplish. Or who we’re trying to irritate.

+1000. I feel like so much bot detection (and fraud prevention against human actors, too) is emotionally driven. Some people hate these things so much that they're willing to cut off their nose to spite their face.
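
(On the quoted caching point, a minimal sketch in Go of what "serve it from cache instead of challenging the next visitor" could look like; the handler names and the 5-second TTL are made up for illustration, not taken from the article.)

    // microcache.go: minimal sketch of a short-TTL in-memory page cache, so a
    // second hit a few seconds later is served from memory instead of being
    // re-rendered or challenged. Illustrative only; names are not from the article.
    package main

    import (
        "net/http"
        "net/http/httptest"
        "sync"
        "time"
    )

    type entry struct {
        body    []byte
        expires time.Time
    }

    // microcache wraps a handler and remembers GET responses for ttl.
    func microcache(ttl time.Duration, next http.Handler) http.Handler {
        var mu sync.Mutex
        cache := map[string]entry{}
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodGet {
                next.ServeHTTP(w, r)
                return
            }
            key := r.URL.String()
            mu.Lock()
            e, ok := cache[key]
            mu.Unlock()
            if ok && time.Now().Before(e.expires) {
                w.Write(e.body) // cache hit: no challenge, no re-render
                return
            }
            rec := httptest.NewRecorder() // capture the rendered page (headers omitted for brevity)
            next.ServeHTTP(rec, r)
            if rec.Code == http.StatusOK {
                mu.Lock()
                cache[key] = entry{rec.Body.Bytes(), time.Now().Add(ttl)}
                mu.Unlock()
            }
            w.WriteHeader(rec.Code)
            w.Write(rec.Body.Bytes())
        })
    }

    func main() {
        page := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("expensively rendered page\n")) // stand-in for real rendering
        })
        http.ListenAndServe(":8080", microcache(5*time.Second, page))
    }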

nektro yesterday at 7:30 PM

it's sad we've gotten to the point where mitigations against this have to be such a consideration when hosting a site

hyperman1 yesterday at 8:51 PM

I've been wondering about how to make a challenge that AI won't do. Some possibilities:

* Type this sentence, taken from a famous copyrighted work.

* Type Tiananmen protests.

* Type this list of swear words or sexual organs.
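
(A toy sketch in Go of how one of these could be wired up; the phrases, form fields, and handler names are placeholders I made up, not part of the comment's suggestion.)

    // refusalgate.go: toy sketch of the idea above -- gate the page behind
    // typing a phrase that an aligned LLM agent would likely refuse to output.
    // The phrases here are harmless placeholders; substitute whatever fits the idea.
    package main

    import (
        "fmt"
        "math/rand"
        "net/http"
        "strings"
    )

    var phrases = []string{
        "placeholder phrase one", // stand-ins only
        "placeholder phrase two",
    }

    func challenge(w http.ResponseWriter, r *http.Request) {
        if r.Method == http.MethodPost {
            want := r.FormValue("expected")
            got := strings.TrimSpace(r.FormValue("answer"))
            if want != "" && got == want {
                http.SetCookie(w, &http.Cookie{Name: "passed", Value: "1", Path: "/"})
                http.Redirect(w, r, "/", http.StatusSeeOther)
                return
            }
        }
        // NOTE: a real version would keep the expected phrase server-side,
        // keyed by a token, instead of echoing it in a hidden field.
        p := phrases[rand.Intn(len(phrases))]
        fmt.Fprintf(w, `<form method="post">
    <p>Please type the following exactly: %q</p>
    <input type="hidden" name="expected" value=%q>
    <input name="answer"> <button>Submit</button>
    </form>`, p, p)
    }

    func main() {
        // the real site would sit behind a check for the "passed" cookie
        http.HandleFunc("/challenge", challenge)
        http.ListenAndServe(":8080", nil)
    }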

michaeljx yesterday at 9:28 PM

For some reason I thought this would be about dealing with very large insects

kragen today at 12:38 AM

This is exciting!

nickpsecurity yesterday at 11:53 PM

I made my pages static HTML with no images, used a fast server, and put them behind BunnyCDN (see profile domain). Ten thousand hits a day from bots cost a penny or something. When I use images, I link to image-hosting sites. It might get more challenging if I try to squeeze meme images in between every other paragraph to make my sites more beautiful.

As for Ted's article, the first thing that popped into my head is that most AI crawlers hitting my sites come from big datacenter cities: Dallas, Dublin, etc. I wonder if I could easily geo-block those cities or redirect them to pages with more checks built in. I just haven't looked into that with my CDN, or in general, in a long time.
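
(A rough sketch in Go of that routing idea; the city list, the cityOf stub, and the handler names are mine, standing in for whatever GeoIP data the CDN or server actually exposes.)

    // geogate.go: rough sketch of the city-level idea -- send traffic from
    // datacenter-heavy cities to a page with extra checks. cityOf is a stub;
    // wire it to whatever GeoIP lookup (or CDN geo header) is actually available.
    package main

    import (
        "net"
        "net/http"
    )

    var datacenterCities = map[string]bool{
        "Dallas": true,
        "Dublin": true, // extend with whatever shows up in the logs
    }

    // cityOf is a placeholder: a real version would query a GeoIP city database.
    func cityOf(remoteAddr string) string {
        host, _, err := net.SplitHostPort(remoteAddr)
        if err != nil || net.ParseIP(host) == nil {
            return ""
        }
        return "" // TODO: look up host in a city database here
    }

    func geoGate(next, checked http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if datacenterCities[cityOf(r.RemoteAddr)] {
                checked.ServeHTTP(w, r) // suspicious origin: page with more checks
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        site := http.FileServer(http.Dir("./public"))
        checked := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            http.Error(w, "extra checks would go here", http.StatusForbidden)
        })
        http.ListenAndServe(":8080", geoGate(site, checked))
    }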

They also usually request files from popular PHP frameworks and other things like that. If you don't use PHP, you could auto-ban on the first request for a PHP page. Likewise for anything else you don't serve.
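
(A minimal sketch of that auto-ban idea in Go, assuming the site serves no PHP whatsoever; the in-memory ban list and the names are mine, not from the comment.)

    // phpban.go: minimal sketch of "auto-ban on the first request for a PHP page"
    // for a site that serves no PHP at all. The ban list is an in-memory set;
    // a real setup would push the block down to the firewall or CDN.
    package main

    import (
        "net"
        "net/http"
        "strings"
        "sync"
    )

    var (
        mu     sync.Mutex
        banned = map[string]bool{}
    )

    func clientIP(r *http.Request) string {
        host, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            return r.RemoteAddr
        }
        return host
    }

    func phpBan(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ip := clientIP(r)
            mu.Lock()
            if strings.HasSuffix(strings.ToLower(r.URL.Path), ".php") {
                banned[ip] = true // we serve no PHP, so this is almost certainly a bot
            }
            isBanned := banned[ip]
            mu.Unlock()
            if isBanned {
                http.Error(w, "forbidden", http.StatusForbidden)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        site := http.FileServer(http.Dir("./public")) // the actual static site
        http.ListenAndServe(":8080", phpBan(site))
    }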

Of the two, looking for .php is probably lightning quick with low CPU/RAM utilization in comparison.

zkmon yesterday at 7:01 PM

[flagged]

kiitos yesterday at 10:05 PM

what a just totally bizarre perspective

all of the stuff that's being complained about is absolute 100% table-stakes stuff that every http server on the public internet has needed to deal with for, man, i dunno, a minimum of 15 years now?

as a result literally nobody self-hosts their own HTTP content any more, unless they enjoy the challenge in like a problem-solving sense

if you are even self-hosting some kind of captcha system you've already made a mistake, but apparently this guy is not just hosting but building a bespoke one? which is like, my dude, respect, but this is light years off the beaten path

the author whinges about google not doing their own internal rate limiting in some presumed distributed system, before any node in that system makes any http request over the open internet. that's fair, and not doing so is maybe bad user behavior, but on the open internet it's the responsibility of the server to protect itself as it needs to, not the other way around
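
(To make "the server protects itself" concrete: a bare-bones per-IP limiter sketch in Go using golang.org/x/time/rate; the limits and wiring are arbitrary example values, not anything from the thread.)

    // ratelimit.go: bare-bones per-client rate limiting, i.e. the kind of
    // self-protection being argued for here. Limits are arbitrary example values.
    package main

    import (
        "net"
        "net/http"
        "sync"

        "golang.org/x/time/rate"
    )

    var (
        mu       sync.Mutex
        limiters = map[string]*rate.Limiter{}
    )

    func limiterFor(ip string) *rate.Limiter {
        mu.Lock()
        defer mu.Unlock()
        l, ok := limiters[ip]
        if !ok {
            l = rate.NewLimiter(rate.Limit(2), 10) // ~2 req/s, burst of 10 (example values)
            limiters[ip] = l
        }
        return l
    }

    func limited(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ip, _, err := net.SplitHostPort(r.RemoteAddr)
            if err != nil {
                ip = r.RemoteAddr
            }
            if !limiterFor(ip).Allow() {
                http.Error(w, "slow down", http.StatusTooManyRequests)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        site := http.FileServer(http.Dir("./public"))
        http.ListenAndServe(":8080", limited(site))
    }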

everything this dude is yelling about is immediately solved by hosting thru a hosting provider, like everyone else does, and has done, since like 2005
