Precisely. “AI” contributions should be seen as an extension of the individual. If anything, projects could ask that the account belong to a person and not be a bot-only account. Basically, a person’s own reputation should be on the line.
> Precisely. “AI” contributions should be seen as an extension of the individual.
That's an OK view to hold, but I'll point out two things. First, it's not how the tech is usually wielded to interact with open-source software. Second, your worldview is at odds with the owners of this technology: the main reason why so much money is being poured into AI coding is that it's seen by investors as a replacement for the individual.
Interesting argument for AI ethics in general. It takes the form of "guns don't kill people - people kill people".
Reputation isn't very relevant here. Yes, for established, well-known FOSS developers, their reputation will tank if they put out sloppy PRs, and people will just ignore them.
But the projects aren't drowning under PRs from reputable people. They're drowning in drive-by PRs from people with no reputation to speak of. Even if you outright ban their account, they'll just spin up a new one and try again.
Blocking AI submissions serves as a heuristic to reduce this flood of PRs, because the alternative is to ban submissions from people without reputation, and that'd be very harmful to open source.
And AI cannot be the solution here, because most open source projects have no funds. Asking maintainers to fork over $200/month for "AI code reviews" just kills the project.