None of what you said shows how it's an issue, beyond "it just is." Doctors, for example, "plagiarize" all the time: copying standardized diagnostic protocols, clinical notes from previous visits, and peer-reviewed treatment plans. The risk lies in the information actually being wrong, not in a lack of "original" expression (which might even be worse, as when they try some "novel" treatment and end up killing the patient). There is no fraud involved, and the supposed harm of plagiarism is, again, a completely fictional issue.
I am also not sure why you keep bringing up Altman et al. I really don't give a shit what they are talking about; that is not what I am discussing. You keep trying to inject your views on these people when they are not relevant to the points I made, which concern the theoretical concepts of machine learning and training and their intersection with intellectual property. I am not interested in your opinions on these people, and they are not the only ones who stand to benefit from the democratization of AI models and the public release of weights.
Anyway, I think we fundamentally hold different views on the freedom of information and the fallacious nature of IP, views that aren't going to change in an online argument. So I'll bid you a good day and won't continue this conversation further, as I don't think it's productive for either of us.