Concerns about wasting maintainers' time, onboarding, or copyright are of great interest to me from a policy perspective. But I find some of the debate around the quality of AI contributions to be odd.
Quality should always be the responsibility of the person submitting changes. Whether a person used LLMs should not be a large concern if they are acting in good faith. If they submit bad code, having used AI is not a valid excuse.
Policies restricting AI use might hurt good contributors while bad contributors ignore the restrictions. That said, restrictions for non-quality reasons, like copyright concerns, might still make sense.
The real invariant is responsibility: if you submit a patch, you own it. You should understand it, be able to defend the design choices, and maintain it if needed.
It should be the responsibility of the person submitting changes. The problem is that AI apparently makes it easy for people to shirk that responsibility.
> If they submitted bad code...
The core issue is that it takes a large amount of effort even to assess this, because LLM-generated code looks superficially good.
It is said that statically typed FP languages make it hard to implement something you don't really understand, while dynamically typed languages make it easier to implement something you only partly understand.
LLMs take this to another level: they let people implement something with zero understanding of what they are implementing.
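To make the type-system point concrete, here is a minimal Haskell sketch (a hypothetical example of my own, not from any contribution under discussion): the `Maybe Int` in the signature means parse failure is visible in the type, so whoever writes the calling code has to confront that case before the program even compiles.

```haskell
import Text.Read (readMaybe)

-- The Maybe in the type makes parse failure impossible to ignore:
-- the compiler rejects any caller that only handles the Just case.
parsePort :: String -> Maybe Int
parsePort = readMaybe

describePort :: String -> String
describePort s =
  case parsePort s of
    Nothing -> "invalid port: " ++ s
    Just p  -> "listening on port " ++ show p

main :: IO ()
main = do
  putStrLn (describePort "8080")  -- listening on port 8080
  putStrLn (describePort "oops")  -- invalid port: oops
```

In a dynamically typed language you can paste in a `describePort` that assumes the happy path and never notice the failure case until runtime; an LLM will happily generate that version for you, and it will look fine at a glance.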