The elephant in the room there is that if you allow AI contributions, you immediately have a licensing issue: AI content cannot be copyrighted, and so the rights cannot be transferred to the project. At any point in the future, someone could sue your project because it turned out the AI had access to copyrighted code, and you are now on the hook for the damages.
Open source projects should not accept AI contributions without guidance from some copyright legal eagle to make sure they don't accidentally expose themselves to risk.
> AI content cannot be copyrighted, and so the rights cannot be transferred to the project. At any point in the future, someone could sue your project because it turned out the AI had access to copyrighted code, and you are now on the hook for the damages.
Not quite. Since it has no copyright, being machine-created, there are no rights to transfer: anyone can use it, as it's public domain.
However, since it came from an LLM, yes, there's a decent chance it might be plagiarized, and you could be sued for that.
The problem isn't that it can't transfer rights; it's that it can't offer any legal protection.
Sorry, this doesn't make sense to me.
Any human contributor can also plagiarize closed-source code they have access to. And they cannot "transfer" said code to an open source project, as they do not own it. So it's not clear what "elephant in the room" you are highlighting that is unique to AI. Copyrightability isn't the issue: an open source project can never obtain copyright of plagiarized code, regardless of whether the contributor is a human or an AI.
AI code by itself cannot be protected. However, the stitching together and curation of AI outputs can create a copyright claim.
You may indeed have a licensing issue... but how is that going to be enforced? Given the sheer amount of AI-generated code coming down the pipes, how?
> At any point in the future, someone could sue your project because it turned out the AI had access to copyrighted code, and you are now on the hook for the damages.
So it is said, but that'd be obvious legal insanity (i.e., that hitting accept on a random PR makes you legally liable for damages). I'm not a lawyer, but short of a criminal conspiracy to exfiltrate private code under the cover of the LLM, it seems obvious to me that the only person liable in a situation like that is the person responsible for publishing the AI PR. The "agent" isn't a thing; it's just someone's code.
You might find that the AI accepts that as a valid reason for rejecting the PR.
Well, after today's incidents I've decided that none of my personal outputs will be public. I'll still license them appropriately, but I won't even announce their existence anymore.
I was doing this for fun, and sharing in the hope that someone would find them useful, but sorry. The well is poisoned now, and I don't want my outputs to be part of that well, because anything put out with good intentions is turned into more poison for future generations.
I'm tearing the banners down and closing the doors. Mine is a private workshop from now on. Maybe people will get some binaries in the future, but no sauce for anyone, anymore.