I don't know; all the code I've seen produced by an LLM doesn't appear to be derived from anything in particular. Also, the source code the model was trained on doesn't exist verbatim inside the model, so the LLM can't return a snippet copied from some other code base; the snippet isn't stored in the model in the first place. To put it another way: show me your code appearing in the output of an LLM without correct attribution.
At least on GitHub there is a setting to block Copilot suggestions that match publicly available source code, so the chance of verbatim reproduction is higher than zero. That matches my experience last year, when multiple Copilot chat responses were redacted for exactly that reason.
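For reference, a minimal sketch of reading that policy through the GitHub REST API, assuming the Copilot billing endpoint and its public_code_suggestions field ("allow", "block", or "unconfigured"); the org name and token here are placeholders:

    # Sketch: read an org's "suggestions matching public code" policy.
    # Assumes GET /orgs/{org}/copilot/billing, whose JSON response
    # includes a public_code_suggestions field; ORG is hypothetical
    # and the token must have Copilot billing read access.
    import os
    import requests

    ORG = "your-org"  # placeholder organization name
    token = os.environ["GITHUB_TOKEN"]

    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/copilot/billing",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
            "X-GitHub-Api-Version": "2022-11-28",
        },
        timeout=10,
    )
    resp.raise_for_status()
    policy = resp.json().get("public_code_suggestions")
    print(f"Suggestions matching public code: {policy}")

When the policy is "block", GitHub filters out suggestions of roughly 150 characters or more that match public code, which is why those redactions show up in Copilot chats.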