Very reasonable stance. I see reviewing and accepting a PR as a question of trust - you trust the submitter to have done everything they can to make the PR correct and useful.
Something may need to change now that some people think just asking an LLM is "the most they can do" - but it's not about using AI, it's about being aware and responsible when using it.
> I see reviewing and accepting a PR is a question of trust
I think that's backwards, at least as far as accepting a PR. Better that all code is reviewed as if it is probably a carefully thought out Trojan horse from a dedicated enemy until proven otherwise.
I think framing it as a trust question is exactly right
That's the key part in all this. Reviewing PRs needs to be a rock-solid process that can catch errors, whether human- or AI-generated.
Importantly, though, we generally assume there are few bad actors.
But after the XZ attack, we kind of have to assume that advanced persistent threats are a reality for FOSS too.
I can envisage a Sybil attack where several seemingly disparate contributors are actually one actor building a backdoor.
Right now we have a disparity: many contributors can use LLMs, but the receiving projects aren't able to review their PRs as effectively with LLMs.
LLM-generated content often (perhaps by definition) seems acceptable to LLMs. This is the critical issue.
If we had a means of assessing PRs effectively and objectively, that would make this moot.
I wonder if this is a whole new class of issue. Is judging a PR harder than making one? It seems so right now.