Interesting argument for AI ethics in general. It takes the form of "guns don't kill people - people kill people".
I don't think any side on the issue of gun ownership has ever claimed that statement is false, so I'm not sure what your point is.
An argument I have some sympathy for, while still being moderately (or more) in favor of gun control (here in the USA, where I'm a citizen).
Gun control in the regions that have implemented it, though imperfect, seems to have had a good deal of success, and to me the legitimate, non-harmful capabilities lost seem worth trading for those gains. (Reasonable people can disagree here!)
Whereas if we accept the proposition that the vast majority of future code will be written by AI (and I do), it seems to me that the valuable projects taking hard-line stances against it will eventually have to either retreat from that position or face insurmountable difficulties in staying relevant while holding to it.
Unfortunately, ChatGPT turned “text continuation” into “a separate entity you can talk to”.