I'm of course assuming the legal status quo holds, where code properly generated by an LLM is also explicitly public domain. No shadiness involved.
(There's always a risk of an LLM copying something verbatim by accident, but if the designers are doing their job, that chance gets low enough to be acceptable; human-written code carries that risk too, after all. And in cases that aren't accidental, where a human intentionally uses prompts to draw out training text verbatim and then submits that code in a patch, it's just a human violating copyright with extra steps.)
> code properly generated by an LLM is also explicitly public domain
Where? I haven't heard of any such ruling.