Who even uses their open source product?
Security through obscurity isn't a great strategy.
There are endless closed calendar options. Cal.com being FOSS, and not making us feel locked in forever, was the only reason we chose it over spending limited cycles self-hosting it at Distrust and Caution.
AI can clone something like cal.com with or without source code access, so by pointlessly trying to defend against AI they are just ruining the trust they built with their customers, which is the one thing AI can never create out of thin air.
We exclusively run our companies on FOSS software we can audit or change at any time, because we work in security research and every tool we choose is -our- responsibility.
They ruined their one and only market differentiator.
We will now be switching to self-hosting ASAP and canceling our subscriptions.
Really disappointing.
Meanwhile at Distrust and Caution we will continue to open source every line of code we write, because our goal is building trust with our customers and users.
Security by obscurity has never been real.
AI sure is useful as a scapegoat for any negative PR inducing moves.
I guess why fix vulnerabilities when you can just obscure them?
Saaspocalypse is coming for cal.com
TIL about yet another calendar application I don't need. Someone should set up their openclaw to write a new todo/calendar app each week; they'll be billionaires by the end of the year.
Sounds backwards to me.
Enshittification has come for VC-backed open source. AI has deemed commercial open source obsolete, especially now that users can point Claude Code at cal.com on GitHub and ask it to build those scheduling features directly into their own product. That’s what spooked Cal.
Monumentally dumb given their codebase is already public and the type of security issues that exist in software are usually found in the oldest code. But also, and more importantly, cal.com launched coss.com last year, open source is (ostensibly) their DNA. How could they do a complete 180 on something so fundamental and think that wouldn’t worry customers, much more so than their codebase being public? I cannot even begin to understand this. Surely there must be more to the story?
I have fond memories of this project. Contributing to it really helped me ramp up my dev skills and was effectively my introduction to monorepos in JavaScript. It was the kind of codebase I couldn’t get my hands on while working in my part of the world. Good luck going closed source.
I hate how this sounds... but this reads to me as: "we lack confidence in our code security, so we're closing the source to conceal whatever vulnerabilities may exist."
- You know, Lindsay, as a software engineering consultant, I have advised a number of companies to explore closing their source, where the codebase remains largely unchanged but secure through obscurity.
- Well, did it work for those companies?
- No, it never does. I mean, these companies somehow delude themselves into thinking it might, but... but it might work for us.
This is some truly exceptional clownish attention-seeking. The rationale here is complete nonsense; they just wanted to put "because AI" after announcing a completely self-serving decision. If AI cyber offense is such a concern, recognize your role as a company handling truckloads of highly sensitive information and actually fix your security culture instead of just obscuring it.
Risk tolerance and emotional capacity differ from one individual to another; while I may disagree with the decision, I am able to respect it.
That said, I think it’s important to try to see things from multiple angles rather than bucketing them through your filter bubble alone. Fear sells, and we need to stop buying into it.
ChatGPT, write me a reason to make more money as a tech CEO.
Charge for API access, take a cut of the extensions economy.
How do I do that? I'm open source.
Today, AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.
AI also goes a long way towards erasing the distinction between source code and executable code. The disassembly skill of a good LLM is nothing short of jaw-dropping.
So going closed-source may be safer for SaaS, but closing the source won't save a codebase from being exploited if the binaries are still accessible to the public. In that sense, instead of dooming SaaS as many people have suggested AI will do, it may instead be a boon.
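The kind of scan described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual tooling: `ask_llm` is a stand-in for whichever model client you use, and `SOURCE_EXTS`, `collect_sources`, and `build_review_prompt` are invented names for the sake of the sketch.

```python
# Hypothetical sketch: walk a repo, frame each source file as a security-review
# request, and hand the prompt to a caller-supplied LLM function. Only the
# stdlib is used; the model call itself is deliberately left abstract.
import pathlib

SOURCE_EXTS = {".py", ".ts", ".tsx", ".js", ".go"}  # illustrative file types

def collect_sources(repo_root, max_bytes=20_000):
    """Yield (path, text) for source files small enough to fit one prompt."""
    for path in sorted(pathlib.Path(repo_root).rglob("*")):
        if path.is_file() and path.suffix in SOURCE_EXTS:
            text = path.read_text(errors="replace")
            if len(text) <= max_bytes:
                yield path, text

def build_review_prompt(path, text):
    """Frame a single file as a vulnerability-review request."""
    return (
        f"Review the following file ({path}) for injection, authz bypass, "
        f"SSRF, and secret-handling bugs. List findings with line numbers.\n\n"
        f"{text}"
    )

def scan_repo(repo_root, ask_llm):
    """Run the review over every collected file; `ask_llm` takes a prompt
    string and returns the model's response text."""
    return {str(path): ask_llm(build_review_prompt(path, text))
            for path, text in collect_sources(repo_root)}
```

Note the same loop cuts both ways: a defender can run it against their own repo before publishing, which is exactly the counter-argument raised elsewhere in this thread.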
This seems dishonest, like someone is forcing the decision for other reasons, and they're using security and AI as a distraction.
Security via obscurity and you get to blame AI too! What a win for their marketing team.
Good for them. I’m sure they saw the writing on the wall when Monday.com was cloned. This is the right move.
This is the future now that AI is here. Publishing is going to be dead; read the tea leaves: how many engineers are already claiming they don’t use package managers anymore and just generate dependencies? In 5 years no one will be making an argument for open source or blogging.
Security through obscurity has been known to be a faulty approach since at least Kerckhoffs' principle in 1883. Yet here we are.
Seems like it's just being used as a convenient pretense to back out of open-source.
Security by obscurity. Good luck. So novice.
LOL. Every generation has to learn anew that security through obscurity is no security at all.
This has to be the most bullshit reason I've seen. If AI can be pointed at the code and find vulnerabilities, then do it yourself before publishing the code.
You know what?
Great move.
Open-source supporters don't have a sustainable answer to the fact that AI models can find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug reports left hanging for days.
Unfortunately, this is where things are going, and open-source supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite keeps its full test suite closed-source, which is another good idea.
Could you not simply point AI at your open source codebase and use it to red-team your own codebase?
This post's argument seems circular to me.