If I look around in the FLOSS communities, I see a lot of skepticism towards LLMs. The main concerns are:
1. they were trained on FLOSS repositories without consent of the authors, including GPL and AGPL repos
2. the best models are proprietary
3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).
I agree those are legitimate problems but LLMs are the new reality, they are not going to go away. Much more powerful lobbies than the OSS ones are losing fights against the LLM companies (the big copyright holders in media).
But while companies can use LLMs to build replacements for GPL-licensed code (LLMs that probably have that GPL code in their training set), the reverse is also possible: one can use LLMs to break monopolies open and to build a great deal of open source software.
In the end, the GPL is only a means to an end.
Free software has never mattered more.
All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.
Heck, even Claude Code would be far less useful without grep, diff, git, head, etc., etc., etc. And one can easily see a day where something like a local Claude Code talking to open-weight, open-source models is the core dev tool.
> Why does this matter? Because the “open source” rebrand wasn’t just a marketing change — it was a philosophical amputation.
I cringe whenever I see such an AI-generated sentence, and unfortunately it devalues the article.
I’m not so sure… what I see as more likely is that coding agents will just strip parts from open source libraries to build bespoke applications for users. Users will be ecstatic because they get exactly what they want and they don’t have to worry about upstream supply chain attacks. Maintainers get screwed because no one contributes back to the main code base. In the end open source software becomes critical to the ecosystem, but gets none of the credit.
It’s such a fun time to have 1+ decade(s) of experience in software. Knowing what simple and good are (for me), and being able to articulate it, has let me create so much personal software for myself and my family. It has really felt like turning ideas into reality, about as fast as I can think of them or they can suggest them. And adding specific features, just for our needs. The latest one was a Slack canvas replacement, as we moved from Slack to self-hosted Matrix + Element but missed the multiplayer, persistent monthly notes file we used. Even getting Matrix set up in the first place was a breeze.
$20/month with your provider of choice unlocks a lot.
Edit: the underlying point being, yes to the article. Either building upon the foundations of open source to make personal things, or just modifying a fork for my own needs.
FOSS is dead; long live FOSS.
FOSS came up around the core idea of liberating the software that runs our hardware, and was later sustained by the idea of a commons of commodity software we can build on. But with LLMs we have alternative pathways to, and enablers of, the freedoms:
Freedom 0 (Run): LLMs troubleshoot environments and guide installations, making software executable for anyone.
Freedom 1 (Study/Change): LLMs help make modifications, lowering the bar of technical knowledge required.
Freedom 2 (Redistribute): LLMs force redistribution by building specs and reimplementing if needed.
Freedom 3 (Improve/Distribute): Everyone gets the improvement they want.
As we can see, LLMs make these freedoms more democratic, beyond pure technical capability.
For those who cared only about these four freedoms, LLMs enable them in spades. But for those who additionally looked to free software for its business, signalling, and community values (I include myself in this group), these were never guaranteed by FOSS, and we find ourselves figuring out how to make up for the losses.
Coding agents and LLMs basically tivoize open source.
When AI eventually becomes the primary means of writing code (because hand-programming will be slow enough that no company can continue as before), AI becomes your new compiler, one that comes with a price tag: a subscription.
Programmers were held hostage to commercial compilers until free compilers reached sufficient level of quality, but now it doesn't matter if your disk is full of free/open toolchains if it's not you who is commanding them but commercial AI agents.
Undoubtedly there will be open-source LLMs eventually, of various levels of quality. But to write a free compiler you need a laptop while to train a free programming LLM you need a lot of money. And you also need money to run it.
Programming has been one of the rare arts that even a poor, lower-class kid can learn on their own with an old, cheap computer salvaged from a scrap bin, building up enough intellectual capital to become a well-paid programmer later. I wonder what the equivalent path will be in the future.
Good piece, but two things work against the thesis:
The Sunsama example actually argues in the opposite direction. He spent an afternoon hacking around a closed system with an agent and it worked. If agents are good enough to reverse-engineer and work around proprietary software today, the urgency to switch to open source decreases, not increases. "Good enough" workarounds are how SaaS stays sticky.
And agents don't eliminate the trust problem, they move it. Today you trust Sunsama with your workflows. In this vision, you trust your agent to correctly interpret your intent, modify code safely, and not introduce security holes. Non-technical users can't audit agent-modified code any better than they could audit the original source. You've traded one black box for another.
This is an interesting take. I think a critical missing piece from this article is how the use of coding agents will essentially enable the circumvention of copyleft licenses. Some project that was recently posted on HN is already selling this service [1][2]. It rewrites code/modules/projects to less restrictive licenses with no legal enforcement mechanisms. It's the opposite of freeing code.
[1] Malus.sh; initially a joke but, in the end, not. You can actually pay for their service.
[2] Your new code is delivered under the MalusCorp-0 License—a proprietary-friendly license with zero attribution requirements, zero copyleft, and zero obligations.
The point this article makes, that suddenly agents can do the work of customizing free software, completely makes sense. But, the reality is that the Free Software movement is opposed to the way Lemons are built today, and would not accept a world like this. (Rightfully!)
My belief is that Lemons effectively kill open source in the long run, and generally speaking, people forget that Free Software is even a thing. The reasoning for that is simple: it’s too easy to produce a “clean” derivative with just the parts you need. Lemons do much better with a fully Lemoned codebase than they do with a hybrid. Incentives to “rewrite” also free people from “licensing burdens” while the law is fuzzy.
Open source has never been more alive for me. I have been publishing low-key for years, and AI has expanded that capability more than 100-fold, in all directions. I had previously published packages in multiple languages but had recently cut back to just one that I could maintain manually. Now, with AI, I've started expanding across languages again. Instead of feeling constrained to the toolchains I'm comfortable with, I feel free to publish more and more.
The benefits of publishing AI-generated code as open source are immense, including free code hosting and CI/CD pipelines for builds, tests, linting, security scans, etc. In addition to CI/CD pipelines, my repos have commits authored by Claude, Dependabot, GitHub Advanced Security Bot, Copilot, etc. All of this makes the code more reliable and maintainable, for both human- and AI-authored code.
Some thoughts on two recent posts:
1. 90% of Claude-linked output going to GitHub repos w/ <2 stars (https://news.ycombinator.com/item?id=47521157): I'm generally too busy publishing code to promote it, but at some point that might settle down. Additionally, with how fast AI can generate and refactor code, it can take some time before the code is stable enough to promote.
2. So where are all the AI apps? (https://news.ycombinator.com/item?id=47503006): They are on GitHub with <2 stars! They are there, but without promotion it takes a while for them to gain popularity. That being said, I'm starting to get some PRs.
5 years ago, I set out to build an open-source, interoperable marketplace powered by open-source SaaS. It felt like a pipe dream, but AI has brought the dream to fruition. People are underestimating how much of a threat AI is to rent-seeking middlemen in every industry.
I think it will wall people off from software.
I don’t know what SaaS has to do with FOSS. The point of FOSS was to allow me to modify the software I run on my system. If the device drivers for some hardware I depend on are no longer supported by the company I bought it from, if it’s open source, I can modify and extend the software myself.
Copyleft licenses ensure that I share my modifications back if I distribute them. It's a thing for the public good.
Agent-based software development walls people off from that. Mostly by ensuring that the provenance of the code it generates is not known and by deskilling people so that they don’t know what to prompt or how to fix their code.
One thing I keep noticing is that agents are getting better at implementation faster than they’re getting better at judgment.
They can often wire up a library or scaffold a migration, but they’re still pretty shaky at the “should we choose this at all?” layer — pricing cliffs, version floors, lock-in, EOLs, migration blockers, etc.
If coding agents do end up making free software more useful again, I think part of that will come from making open docs / changelogs / migration guides more usable at decision time, not just at implementation time.
> SaaS scaled by exploiting a licensing loophole that let vendors avoid sharing their modifications.
AI is going to exploit even more: "Given the repository -> Construct tech spec -> Build project based on tech spec"
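That laundering pipeline can be sketched in a few lines. This is purely illustrative: `extract_spec` and `reimplement` are hypothetical stand-ins for LLM calls, and no real API or service is implied.

```python
# Hypothetical sketch of the "repo -> tech spec -> rebuilt project"
# pipeline described above. The LLM steps are stubbed so only the
# control flow (and what gets discarded) is visible.

def extract_spec(repo_files: dict[str, str]) -> list[str]:
    # Step 1: derive behavioural requirements from the original code,
    # deliberately discarding the implementation (and with it, the license).
    return [f"must reproduce behaviour of {path}" for path in sorted(repo_files)]

def reimplement(spec: list[str]) -> str:
    # Step 2: generate a "clean" implementation from the spec alone,
    # with no textual link back to the licensed source.
    return "\n".join(f"# TODO: {requirement}" for requirement in spec)

gpl_repo = {"core.py": "...", "cli.py": "..."}
new_code = reimplement(extract_spec(gpl_repo))
print(new_code.count("TODO"))  # 2
```

The point of the sketch is that the spec acts as an airlock: everything the license attached to is dropped at step 1, and step 2 emits unencumbered output.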
At this stage, I want everyone to just close their source and stop working on open source until this licensing issue gets resolved.
Any improvement you make to open source code will be leveraged in ways you didn't intend, eventually making you redundant in the process.
A question I have for those doing Agentic coding - what is the development process used? How are agents organised?
Top down with a "manager" agent telling "coding" agents what to do? I.e. mirroring the existing corporate interpretation of "agile"/scrum development.
Seeing the title of this article, I was thinking it would be interesting to set up an agent environment that mirrors a typical open source project, involving a discussion forum (where features are thrown around) and GitHub issues/PRs (where implementation details are discussed), and then have a set of agents acting as "mergers", i.e. the final review instances.
I assume that agents can be organised in any form at all; it's just a matter of setting up the system prompt and then letting them go for it. A Discourse forum could be set up where agents track the feature requests of the software's users and then discuss how to implement them, or how to work around them.
The reason I ask is that one could then do a direct comparison of development processes, i.e. the open source model versus the corporate top-down process. It would interest me to see which process performs better in terms of maintainability, quality, and feature richness.
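The top-down variant could be wired up roughly like this. Everything here is an assumption for illustration: `call_llm` is a hypothetical stub standing in for a real model call, and no agent framework or vendor API is implied.

```python
# Minimal sketch of the top-down "manager -> coders -> merger" shape
# discussed above. call_llm is a hypothetical stand-in for a real model
# call; it is stubbed so only the orchestration itself is illustrated.

def call_llm(role: str, prompt: str) -> str:
    return f"[{role}] {prompt}"  # stub; a real system would query a model here

def manager(feature: str) -> list[str]:
    # Top-down: the manager decomposes a feature into coder-sized tasks.
    call_llm("manager", f"break down: {feature}")
    return [f"{feature}: backend", f"{feature}: frontend"]

def coder(task: str) -> str:
    # Each coding agent turns one task into a candidate patch.
    return call_llm("coder", f"patch for {task}")

def merger(patches: list[str]) -> list[str]:
    # The "merger" is the final review instance; here it accepts every
    # patch, but a real merger would reject some.
    return [p for p in patches if call_llm("merger", f"review {p}")]

accepted = merger([coder(t) for t in manager("dark mode")])
print(len(accepted))  # 2
```

Swapping in the open source model would mostly mean replacing `manager` with a forum-style discussion step that any agent can open, while keeping `merger` as the gate, which is what would make the two processes directly comparable.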
“Their relationship with the software is one of pure dependency, and when the software doesn’t do what they need, they just… live with it”
Or, more likely, they churn off the product.
The SaaS platforms that will survive are busy RIGHT NOW revamping their APIs, implementing OAuth, and generally reorganizing their products to be discovered and manipulated by agents. Failing at this effort will ultimately result in the demise of any given platform. This goes for the larger SaaS companies too, it'll just take longer.
What's the chance this website is powered by postgresql?
Agree completely. When the megacorps are building hundreds of datacenters and openly talking about plans to charge for software "like a utility," there has never been a clearer mandate for FOSS, and IMO there has never been as much momentum behind it either.
These are exciting times, coming despite any pessimism rooted in our outdated software paradigms.
Maybe, but I don't really believe users can, or want to, start designing software, even if it were possible, which today it isn't really unless you already have software dev skills.
That would basically make users a product manager and UX designer, which they aren't really capable of currently. At most they will discover what they think they want isn't what they actually want.
Let's wait until there are high-quality free software agents before we leap to this conclusion. That may be many years.
I've learned a lot from open source, and I'm building open source myself. So individually? Yeah, this stings a bit. But zooming out to the ecosystem level, I think there's still something genuinely positive happening here. The knowledge compounds, even if the credit doesn't.
(luckily my projects are unpopular enough that nobody bothered training on them lol)
This article misses the point completely. Open source isn't great because it's easy to extract value from it. Open source is great because of the people creating value with it.
Value isn't just slapping a license on something and pushing to GitHub. It's maintaining and curating that software over years, focusing the development towards a goal. It's as much telling users what features you're not willing to add and maintain as it is extending the project to interoperate with others.
And that long term commitment to maintenance hasn't come out of the vibe coded ecosystem. Commitment is exactly what they don't want, rather they want the fast sugar high before they drop it and move on to the next thing.
The biggest threat to open source is the strip mining of the entire ecosystem, destroying communities and practices that have made it thrive for decades. In the past, open source didn't win because it always had the best implementation, but because it was good enough to solve problems for enough people that it became self sustaining from the contribution of value.
I feel that will continue, but it's also going to take a set back from those that aren't interested in contributing value back into the ecosystem from which they have extracted so much.
First of all, free software still matters. And being a slave to a $200 subscription to an oligarch's application that launders other people's copyright is not what Stallman envisioned.
The AI propaganda articles are getting more devious by the minute. It's not just propaganda; it's Bernays-level manipulation!
Is someone building an agent to manage self-hosted infra? A lot of the "convenience" issues around self-hosting free software would go away.
I worry people lack context about how SaaS products are purchased if they think LLMs and "vibe coding" are going to replace them. It's almost never about the feature set. Often it's capex vs. opex budgeting (i.e., it's easier to get approval for a monthly cost than an upfront capital cost), but the biggest factor is liability.
Companies buy these contracts for support and to have a throat to choke if things go wrong. It doesn't matter how much you pay your AI vendor, if you use their product to "vibe code" a SaaS replacement and it fails in some way and you lose a bunch of money/time/customers/reputation/whatever, then that's on you.
This is as much a political consideration as a financial one. If you're a C-suite and you let your staff make something (LLM generated or not) and it gets compromised then you're the one who signed off on the risky project and it's your ass on the line. If you buy a big established SaaS, do your compliance due-diligence (SOC2, ISO27001, etc.), and they get compromised then you were just following best practice. Coding agents don't change this.
The truth is that the people making the choice about what to buy or build are usually not the people using the end result. If someone down the food chain had to spend a bunch of time with "brittle hacks" to make their workflow work, they're not going to care at all. All they want is the minimum possible to meet whatever the requirement is, that isn't going to come back to bite them later.
SaaS isn't about software, it's about shifting blame.
They could, but they won't.
I wonder if there will be a different phenomenon: everyone just developing their own personal version of what they want rather than relying on what someone else built. Nowadays, if the core functionality is straightforward enough, I find that I just end up building it myself so I can tailor it to my exact needs. It takes less time than trying to understand and adapt someone else’s code base, especially if it’s (mostly) AI-generated and contains a great deal of code slop.
Correct headline, wrong reasons. Free software will continue to be the software that nerds write and share openly because they just like doing that.
Some kind of artisan "proper" quality work, compared to cheap enterprise AI slop.
The article makes zero sense to me.
It compares and contrasts open source and free software, and then gives an example of how free software is better than closed software.
But if the premise of the article, that the agent will take the package you pick and adapt it to your needs, is correct, then honestly the agent won't give a rat's ass whether the starting point was free software or open source.
Oh yeah, sure, nothing screams freedom louder than following Anthropic's and OpenAI's suggestions without a second thought.
right. because free software stopped mattering. what an asshole headline
The debate in the comment section here really boils down to: upstream freedom vs downstream freedom.
Copyleft licenses like the GPL/AGPL mandate upstream freedom: upstream has the "freedom" to use anything downstream, including anything written by a corporation.
Non-copyleft FOSS licenses like MIT/BSD are about downstream freedom, which is more of a philosophically utilitarian view, where anyone who receives the software is free to use it however they want, including not giving their changes back to the community, on the assumption that this maximizes the utility of this free software in the world.
If you prioritize the former goal, then coding agents are a huge problem for you. If the latter, then coding agents are the best thing ever, because they give everyone access to an effectively unlimited amount of cheap code.
I think the opposite. It will make all software matter less.
If trend lines continue, it will be faster for AI to vibe-code said software to your customized specifications than to sign up for a SaaS and learn it.
"Claude, create a project management tool that simplifies jira, customize it to my workflow."
So a lot of apps will actually become closed source personalized builds.
tl-didn't finish but I absolutely do this already. Much of the software I use is foss and codex adjusts it to my needs. Sometimes it's really good software and I end up adding something that already exists. Whatever, tokens are free...
> a free software license alone does not empower users to be truly free if they lack the expertise to exercise those freedoms
This is a bullshit argument, and I'm surprised that people aware enough of these issues would try to push it.
Closed (or online-only) software prevents not only the end user from modifying it, but also the "unlicensed" hackers whom the end user could ask for help.
See the "right to repair" movement as a very close example. The possibility of an "ecosystem" of middlemen like these matters!
If most "free software" becomes AI slop, then I'm going to end up reading a lot more of its source code, provided the free software is also open source. If it isn't open source, oh boy, no way.
AI backdoors are already a well known problem, and vibe-coded free software is always going to present a substantial risk. We'll see how it plays out in time, but I can already see where it's heading.
After enough problems, reputation and humans in the loop could finally become important again. But I have a feeling humanity is going to have to learn the hard way first (again).
>agents don’t leave
I think Pete Hegseth would disagree with this statement.
Unfortunately for me, I believe that the algorithms won't allow me to get exposure for my work no matter how good it is so there is literally no benefit for me to do open source. Though I would love to, I'm not in a position to work for free. Exposure is required to monetize open source. It has to reach a certain scale of adoption.
The worst part is building something open source, getting positive feedback, helping a couple of startups and then some big corporation comes along and implements a similar product and then everyone gets forced by their bosses to use the corporate product against their will and people eventually forget your product exists because there are no high-paying jobs allowing people to use it.
With hindsight, Open Source is basically a con for corporations to get free labor. When you make software free for everyone, really you're just making it free for corporations to Embrace, Extend, Extinguish... They invest a huge amount of effort to suppress the sources of the ideas.
Our entire system is heavily optimized for decoupling products from their makers. We have almost no idea who is making any of the products we buy. I believe there is a reason for that. Open source is no different.
When we lived in caves, everyone in the tribe knew who caught the fish or who speared the buffalo. They would rightly get credit. Now, it's like; because none of the rich people are doing any useful work, they can only maintain credibility by obfuscating the source of the products we buy. They do nothing but control stuff. Controlling stuff does not add value. Once a process is organized, additional control only serves to destroy value through rent extraction.
Having over a decade of open source software I've written freely available online, I actually really appreciate the value that AI and LLMs have provided me.
The thing that leaves a bad taste in my mouth is that my works were likely included in the training data, and even if that doesn't violate my licenses (GPL v2/v3), it certainly feels against the spirit of what I intended when distributing my works.
I was made redundant recently "due to AI" (questionable), and it feels like my works in some way contributed to my redundancy: they fed the profits made by these AI megacorps while I am left a victim.
I wish I could be provided a dividend or royalty, however small, for my contribution to these LLMs but that will never happen.
I've been looking for a copy-left "source available" license that allows me to distribute code openly but has a clause that says "if you would like to use these sources to train an LLM, please contact me and we'll work something out". I haven't yet found that.
I'm guessing that such a license would not be enforceable because I am not in the US, but at least it would be nice to declare my intent and who knows what the future looks like.