> I want to be fair to Cal.com here, because I don’t think they’re acting in bad faith. I just think the security argument is a convenient frame for decisions that are actually about something else. […] Framing a business decision as a security imperative does a disservice to the open-source ecosystem that helped Cal.com get to where they are.
That sure sounds like bad faith to me.
> Large parts of it are delivered straight into the user’s browser on every request: JavaScript, …
Ooh, now I want to try convincing people to return from JS-heavy single-page apps to multi-page apps using normal HTML forms and minimal JS only to enhance what already works without it—in the name of security.
(C’mon, let a bloke dream.)
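To make that dream concrete, here is a minimal, hypothetical sketch of "minimal JS only to enhance what already works": a plain HTML form (the `#signup` id and `/signup` endpoint are invented for illustration) that submits normally with no JavaScript at all, plus a small script that upgrades it when `fetch` is available and falls back to native submission if anything goes wrong.

```ts
// Hypothetical sketch of progressive enhancement. Assumes a plain HTML form:
//   <form id="signup" method="post" action="/signup"> ... </form>
// The form works with zero JavaScript; this script only upgrades the experience.

const form = document.querySelector<HTMLFormElement>("#signup");

if (form && "fetch" in window) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault();
    try {
      // Submit the same data to the same endpoint the browser would use anyway.
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      form.insertAdjacentHTML("afterend", "<p>Thanks, you're signed up.</p>");
      form.remove();
    } catch {
      // On any failure, fall back to the browser's native form submission.
      form.submit();
    }
  });
}
```

The security angle is that the attack surface shipped to the client stays tiny: if the script never loads, is blocked, or breaks, the form still works.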
This article raises a lot of good points that strengthen the argument against withholding models just because they're "too powerful". I remain disappointed to see AI corporations gloating about how powerful their private models are, then refusing to provide them to anyone outside a special whitelist. That's more likely to give attackers a way in while leaving defenders with no way to respond, not the other way around.
> Open source creates a useful urgency: when your code is public, you assume it will be examined closely, so you invest earlier and more aggressively in finding and fixing issues before attackers do.
This should be the mentality of every company doing open source. Great points made.