Using an LLM to ponder each response to each request would be way too costly and slow. It's much easier to take the shotgun approach: fire off a lot of requests and deal with whatever bothers to respond.
This, by the way, is nothing new. Back when I still used WordPress, it was common to see your server logs fill up with bots probing the endpoints of commonly compromised PHP software. Probably still a thing, but I don't spend much time looking at logs these days. If you run a public server, dealing with maliciously intended but relatively harmless requests like that is just part of the job. It's as old as running anything on a public port.
And the offending parties writing sloppy code that barely works is nothing new either.
AI has certainly added some opportunistic bot and scraper traffic, but it doesn't change the basic threat model in any fundamental way. Previously, version control servers were relatively low-value things to scrape; code only recently became interesting for LLMs to train on.
Anyway, having anything at all responding on a public port invites opportunistic poking around. Anything that can be abused for DoS purposes might get abused for exactly that. If you don't like that, don't run stuff on public servers, or protect it properly. Yes, this is annoying and not necessarily easy. Cloud-based services exist that take some of that pain away.
Logs filling up with 404, 401, or 400 responses should not kill your server. You might want to add some logic that answers repeat offenders with 429 (Too Many Requests). A bit heavy-handed, but why not. But if you are going to run something that could be used to DoS your server, don't be surprised if somebody does exactly that.
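To make the "tell repeat offenders 429" idea concrete, here is a minimal sketch of a per-IP sliding-window throttle wrapped around a plain WSGI app. The names MAX_HITS and WINDOW_SECONDS (and the limits themselves) are illustrative placeholders, not taken from any real setup.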
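```python
# Sketch of a per-IP 429 throttle for a WSGI app. Standard library only.
# MAX_HITS / WINDOW_SECONDS are made-up illustrative values.
import time
from collections import defaultdict, deque

MAX_HITS = 60        # requests allowed per client...
WINDOW_SECONDS = 60  # ...within this sliding window

hits = defaultdict(deque)  # client IP -> timestamps of its recent requests


def throttle(app):
    """Wrap a WSGI app; answer 429 once a client exceeds its budget."""
    def wrapped(environ, start_response):
        ip = environ.get("REMOTE_ADDR", "unknown")
        now = time.monotonic()
        window = hits[ip]
        # Drop timestamps that have aged out of the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        window.append(now)
        if len(window) > MAX_HITS:
            start_response("429 Too Many Requests",
                           [("Retry-After", str(WINDOW_SECONDS)),
                            ("Content-Type", "text/plain")])
            return [b"Too many requests\n"]
        return app(environ, start_response)
    return wrapped


def hello(environ, start_response):
    # Stand-in for whatever you actually serve.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]


if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("0.0.0.0", 8000, throttle(hello)).serve_forever()
```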
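In practice you'd probably push this kind of thing out of the application and into the reverse proxy or the edge (nginx's request rate limiting, fail2ban, or whatever your cloud provider offers), but the principle is the same: cheap requests get cheap answers, and repeat offenders get told to go away.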