You take responsibility. That means if the AI messes up, you get punished. No pushing blame onto the stupid computer. If you're not comfortable with that, don't use the AI.
> That means if the AI messes up
I'm not talking about maintainability or reliability. I'm talking about legal culpability.
If they merge it in despite the commit noting the model version, then they're arguably taking a position on it too: that it's fine to use code from an AI that was trained like that.
There’s no reasonable way for you to use AI-generated code and guarantee it doesn’t infringe.
The whole “use it, but if it doesn’t behave as expected, it’s your fault” is a ridiculous stance.