The real threat is not security but bad actors copying your code and calling it theirs.
IMHO, open source will continue to exist and succeed, but AI is a deterrent for most. Let's be honest: in recent times, the main reason startups went open source first was to build a community and an organic growth engine powered by early adopters. That is no longer viable, and in fact it simply helps competitors. So why do it?
The only open source that will remain will be the projects that are true to the ethos.
I agree with you that AI's disruption of attribution is a much bigger problem, but it's also worth recognizing that not everyone has this same motivation. It mostly affects copyleft open source licenses.
Attribution isn't required for many permissive open source licenses. Dependencies with those licenses often end up inside closed-source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.
> The real threat is not security but bad actors copying your code and calling it theirs.
How has this changed?
If you copy code in a way that infringes its license, yes, it will be harder to sort things out legally.
Otherwise, copying code and improving it with AI is no different from doing so with humans, as long as the product improves.
For genuinely solid products, I doubt that semi-automated AI copies can improve them more than the original team can.
AI will act as a filter against low quality.