In my experience, they are motivated by two issues you run into when using Claude Code (or a similar tool):
1. The LLM treats your instructions as more what you'd call "guidelines" than actual rules -- it will mostly make a PR after fixing a bug, but sometimes not. It will mostly run tests after completing a fix, but sometimes not. So the sentiment becomes "heck, let's write a prompt that tells it to always run tests after fixing code," etc.
2. You end up running the LLM tool against state that lives in GitHub (or VCS du jour). E.g., I open an issue describing the bug I found, or whatever new feature I want. Then I tell Claude to go look at issue #xx. It runs in the terminal, asks me a bunch of unnecessary permission questions, fixes the bug, then perhaps makes a PR (perhaps I have to ask for that), then I go watch CI status on the PR, come back to the terminal, and tell it CI passed so please merge (or I can ask it to watch CI and review status and merge when ready). After a while you realize that the whole process could be driven from the GitHub UI -- if only there were a "have Claude work on this issue" button. No need for the terminal.
After a while, many people realize this often produces worse results, because it injects additional noise into the context: the overhead of invoking the gh CLI and parsing JSON-wrapped comments, or, worse, an MCP server.
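To make the noise point concrete, here's a toy sketch. The comment text and metadata fields are made up (loosely modeled on the kind of envelope `gh issue view --json comments` emits); the point is just that the useful sentence is a small fraction of what lands in the model's context:

```python
import json

# Hypothetical issue comment, as a human would write it (plain text).
plain = "The parser crashes on empty input; see the stack trace above."

# Roughly the kind of JSON envelope a CLI/MCP round trip wraps around it.
# All field values here are invented for illustration.
wrapped = json.dumps({
    "comments": [{
        "author": {"login": "someuser"},
        "createdAt": "2024-01-01T00:00:00Z",
        "body": plain,
        "url": "https://github.com/owner/repo/issues/1#issuecomment-1",
    }]
})

# The envelope is a multiple of the useful text -- all of it spends
# context-window tokens without adding information about the bug.
ratio = len(wrapped) / len(plain)
print(ratio)
```

None of that metadata helps the model fix the bug, but all of it competes for attention with the text that does.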
But they get the dopamine loop of keeping things moving -- flashing colors, high-score token counts, plausible-looking outputs -- so it's easy to deceive oneself into thinking something remarkable was discovered.