Using Gitea does not help if your goal is to allow unauthenticated read-only access to the repo from a web browser. The scrapers use that to hit every individual commit, over and over and over.
We used nginx config to limit access to individual commits, while leaving the "rest" of what Gitea makes available read-only to unauthenticated users unaffected.
Yuk…
http {
    # ... other http settings
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
    # ...
}

server {
    # ... other server settings
    location / {
        limit_req zone=mylimit burst=20 nodelay;
        # ... proxy_pass or other location-specific settings
    }
}
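For the "limit access to individual commits" part specifically, a separate location block can gate the expensive pages. This is only a sketch: the route patterns (`/owner/repo/commit/...` etc.), the upstream name `gitea`, and the session cookie name are my assumptions, so verify them against your Gitea version before copying.

```nginx
# Deny the per-commit / per-diff / blame pages to clients that don't even
# carry a Gitea session cookie (default name: i_like_gitea -- check your
# [session] COOKIE_NAME setting). Route regex is an assumption.
location ~ ^/[^/]+/[^/]+/(commit|compare|blame)(/|$) {
    if ($cookie_i_like_gitea = "") {
        return 403;
    }
    proxy_pass http://gitea;  # assumed upstream name
}
```

Note the cookie check is cosmetic, not authentication: a scraper can send a junk cookie. It mostly just raises the bar above "dumb crawler", which in practice is where most of the traffic comes from.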
Rate-limit read-only access at the very least. I know this is a hard problem for open source projects that have relied on web access like this for a while. Anubis?
Every commit. Every diff between two different commits. Every diff with different query parameters. Git blame for each line of each commit.
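To put a rough number on that: even a modest repo explodes combinatorially. A back-of-envelope sketch (the counts are illustrative, not exact Gitea routes, and query-parameter variants would multiply them further):

```python
# Rough count of distinct read-only pages a scraper can request for a repo
# with n commits and f files. Illustrative math only -- real route counts
# (and query-string variants) differ.
def url_space(n_commits: int, n_files: int) -> int:
    commit_pages = n_commits                           # one page per commit
    compare_pages = n_commits * (n_commits - 1) // 2   # a diff for every pair
    blame_pages = n_commits * n_files                  # blame per file per commit
    return commit_pages + compare_pages + blame_pages

# A small project: 5000 commits, 300 files -> ~14 million distinct pages.
print(url_space(5000, 300))  # → 14002500
```

Every one of those pages is dynamically rendered, which is why this traffic hurts so much more than static mirroring would.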
Imagine a task to enumerate every possible read-only command you could make against a Git repo, and then imagine a farm of scrapers running exactly one of them per IP address.
Ugh.