'Censorship' may be too strong a word, but there is something unprecedented about this. AI tools are supposed to be general-purpose, able to assist with all sorts of tasks. It's expected that they're restricted when it comes to "unsafe" content like illegal or NSFW information and activities. However, this is the first time, to my knowledge, that an AI tool has been restricted from assisting with something that's perceived as a threat to the AI company.
> this is the first time, to my knowledge, that an AI tool has been restricted from assisting with something that's perceived as a threat to the AI company
You think so? I was under the impression that all the model providers have been trying to prevent their models from being used to train competitors' models for a while now.