Hacker News

NitpickLawyer, today at 9:43 AM (8 replies)

Project maintainers will always have the right to decide how to maintain their projects, and "owe" nothing to no one.

That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd call reasonable. Others have already commented that it's likely unenforceable, but I'd also say it's unreasonable on utility grounds. It leaves value on the table at a time when projects really shouldn't. Things like documentation tracking, regression tracking, security, and feature parity can all be enhanced with carefully orchestrated assistance. To simply ban this is ... a choice, I guess. But it's not reasonable, in my book. It's like saying we won't use CI/CD because it's automated; we're purely manual here.

I think a lot of projects will find ways to adapt. Create good guidelines, help the community to use the best tools for the best tasks, and use automation wherever it makes sense.

At the end of the day, slop is slop. You can always refuse to even look at something if you don't like the presentation, or if the code is a mess, or if it doesn't follow conventions, or if a PR is +203323 lines, and so on. But attaching "LLMs aka AI" to the reasoning only invites drama; if anything, it makes distinguishing good content from good-looking content even harder. In the long run a ban won't be viable. If there's a good way to optimise a piece of code, it won't matter where that optimisation came from, as long as it can be proved good.

tl;dr: focus on better verification instead of better identification; prove that a change is good instead of focusing on where it came from; test, learn, and adapt. Dogma was never good.
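To make the "verify the change, not the author" idea concrete, here is a minimal sketch of a mechanical PR gate. Everything in it is hypothetical (the `MAX_DIFF_LINES` threshold, the function names, the gate messages are my own illustration, not any project's actual tooling): it judges a change only by verifiable properties, in this case diff size and test results, never by who or what produced it.

```python
# Hypothetical sketch: a provenance-blind PR gate. Thresholds and names
# are invented for illustration.
import subprocess

MAX_DIFF_LINES = 500  # arbitrary example threshold


def diff_line_count(base: str = "origin/main") -> int:
    """Count added + removed lines in the working branch vs. a base ref."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        # git reports "-" for binary files; skip those counts
        if added.isdigit():
            total += int(added)
        if removed.isdigit():
            total += int(removed)
    return total


def gate(diff_lines: int, tests_passed: bool) -> str:
    """Accept or bounce a change on mechanical criteria only."""
    if diff_lines > MAX_DIFF_LINES:
        return "rejected: diff too large to review"
    if not tests_passed:
        return "rejected: test suite failed"
    return "accepted: passes mechanical checks"
```

A gate like this rejects the +203323-line PR and the change that breaks the test suite for the same reason in both the human and the LLM case, which is the point: the criteria are checkable, so there is nothing to argue about.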


Replies

rswail, today at 1:05 PM

The entire basis of OSS is licensing.

Licensing is dependent on IPR, primarily copyright.

It is very unclear whether the output of an AI tool is subject to copyright.

So if someone uses AI to refactor some code and that refactored code isn't considered a derivative work, then the refactored source may no longer be covered by the copyright, or by the license that depends on it.

ptnpzwqd, today at 10:07 AM

At the moment verification at scale is an unsolved problem, though. As mentioned, I think this will act as a rough filter for now, but probably not work forever - and denying contributions from non-vetted contributors will likely end up being the new default.

Once outside contributions are rejected by default, the maintainers can of course choose whether or not to use LLMs themselves.

I do think that it is a misconception that OSS software needs to be "viable". OSS maintainers can have many motivations to build something; shipping a product might not be at the top of that list at all, and they certainly have no obligation to ship one. Personally, I use OSS as a way to build and design software with a level of gold plating that is not possible in most work settings, for the feeling that _I_ built something, and for the pure joy of coding - using LLMs to write code would work directly against those goals. Whether LLMs are essential in more competitive environments is something there are mixed opinions on, but in those cases being dogmatic is certainly riskier.

bandrami, today at 12:57 PM

Until the copyright questions surrounding LLM output are solved, it's not "vibes" to reject it but simply "legal caution".

mapcars, today at 9:53 AM

> Or if the code is a mess. Or if it doesn't follow conventions.

In my experience these things are very easily fixable by AI: I just ask it to follow the patterns and conventions used in the code, and it does that pretty well.

mathw, today at 9:55 AM

Your CI/CD analogy is flawed. Not everyone was convinced of the merits of CI/CD either, but CI/CD isn't a technology built on vast energy use and copyright violation at a scale unseen in all of history - one that has upended the hardware market, shaken the idea of job security for developers to its foundations, and done all that while offering no obvious benefit to groups wishing to produce really solid software. Maybe that benefit comes eventually, but not at this level of maturity.

But you're right that it's probably unenforceable. They will likely end up accepting PRs written with LLM assistance, but if they do, it will be because it's well-written code that the contributor can explain in a way that doesn't sound to the maintainers like an LLM is answering their questions. And maybe at that point the community as a whole would have less to worry about - assuming we're not setting ourselves up for horrible licence-violation problems down the line when it turns out an LLM spat out something verbatim from a GPLed project.

ckolkey, today at 11:31 AM

owing "nothing to no one" means you are allowed to be unreasonable...

surgical_fire, today at 10:37 AM

> That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable.

To outright accept LLM contributions would be as much "pure vibes" as banning them.

The thing is, those who maintain open source projects have to decide where they want to spend their time. It's open source, they are not being paid for it, so they should and will decide what is acceptable and what is not.

If you dislike it, you are free to fork it and make an "LLMs welcome" fork. If, as you imply, LLM contributions are invaluable, your fork should eventually become the better choice.

Or you can complain to the void that open source maintainers don't want to deal with low effort vibe coded bullshit PRs.

keybored, today at 11:29 AM

> That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable.

The response to a large enough amount of data is always vibes. You cannot analyze it all, so you offload it to your intuition.

> It leaves stuff on the table in a time where they really shouldn't. Things like documentation tracking, regression tracking, security, feature parity, etc. can all be enhanced with carefully orchestrated assistance.

What’s stopping the maintainers themselves from doing just that? Nothing.

Producing it through their own pipeline means they don’t have to guess at the intentions of someone else.

Maintainers doing it themselves is just the logical conclusion. Why go through the process of vetting the contribution of some random person who says they've used AI "a little" to check whether it was maybe really 90%, or whether they have ulterior motives... just do it yourself.