It seems the author unironically advises writing your commit messages like this: "Restructured Claude’s module architecture, rejected initial state management approach, rewrote error handling from scratch", to have a chance at a defense in a potential court hearing. I find it funny, if vindicating for my personal approach. If the expectation is to "restructure, reject, rewrite" whatever "AI" spits out, why use "AI" at all at this point???
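The practice being described can be sketched in a throwaway repo (hypothetical paths, file names, and wording; a minimal illustration of the commit-message style, not legal advice):

```shell
# Hypothetical sketch: a commit whose message records the human
# "restructure, reject, rewrite" work the comment describes.
set -e
rm -rf /tmp/provenance-demo
mkdir /tmp/provenance-demo
cd /tmp/provenance-demo
git init -q
git config user.email dev@example.com
git config user.name Dev
echo 'def handle_error(): ...' > module.py
git add module.py
# First -m is the subject line; the second becomes the body paragraph
# documenting what the human changed relative to the AI draft.
git commit -q \
  -m "Rework error handling in module.py" \
  -m "Restructured the AI-drafted module architecture, rejected its initial
state management approach, and rewrote error handling from scratch."
git log -1 --format=%s   # prints: Rework error handling in module.py
```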
i do, all of it. sorry
I'm still flabbergasted that people – and big, visible companies with big targets on their backs – choose to keep on using the output of LLMs without having an answer to these questions.
And I'm worried that once that has been sufficiently normalized, laws and interpretations of them will adapt to whatever best suits those users. Which will mean copyrightwashing of FOSS. My only hope then is that surely if free software can be copyright-washed by the big guys, then so can the little guy copyright-wash the big guys' blockbuster movies or whatever, which might lead to some sort of reckoning.
The idea that the provenance of a given tool's code inherently pollutes the material it's used with seems kind of illogical. Wouldn't it follow from this premise that any code written using open source IDEs and debugged with open source debuggers and other tooling would itself then be considered copyleft? Are works written with LibreOffice not copyrightable?
There's obviously a huge issue with the legitimacy and ownership of training data being fed to LLMs. That seems like an issue between the owners of that IP and the people training the models and selling them as services more than the people using the tool. Isn't this just another flavor of SCO trying to extort money out of companies using Linux?
the entire US economy rides on AI. no ruling throwing a wrench into the multi-trillion-dollar engine is ever going to be permitted to happen
IMO this is the greatest argument against AI as technofascism. The general public seems to believe that AI will usher in technofascism by claiming corporate ownership of AI output: the independent entrepreneur will be unable to compete against the corporations' compute, every piece of data about you will be stolen and monetized by AI, and you will own nothing.
But AI might in fact do the exact opposite and reverse the privatization trend that the West has been going through for the last 400 years. All of our copyright laws rely on the idea that there is a human consciousness behind the copyright. The more AI has input, the less we can claim ownership. If AI returns everything to the commons, then it results in a much more egalitarian world.
Hilariously, many people, especially artists, see the return of the commons as an assault against them. They’re so captured by copyright that they assume any infringement on their copyright is inherently fascist. It’s ridiculous. Copyright is a corporation’s number-one weapon when it comes to creating a moat and keeping the masses out.
The original intent of copyright, in fact, was an incentive to return an idea to the commons. Experts used to hide their discoveries in order to keep them for themselves. Copyright provided an opportunity to release this knowledge and still profit. There were even several cases where it was established that those who claimed copyright could retain it even if the idea had been previously discovered. This created a huge incentive: release the knowledge or risk having your process copyrighted by the opposition. But that system worked because copyright could only exist for so long (14 years, renewable once for another 14).
Now copyright is a lifelong sentence at almost 100 years. The entire purpose of it has been undermined. Corporations own all your childhood and by the time you can profit off of it, it’s outdated.
A world where the mainstream is primarily a commons seems to me like an egalitarian world. I’d like to live in that world.
Copyright has a lot to do with what we as a society want to protect and encourage. We want to protect an author that put the hours into creating a book, as opposed to the person creating a copy of that work. The person copying can claim they put in work too but the claim is not strong enough to override our preference to protect original authors.
Part of the problem with generated works is that they are low-effort, like the work of the person copying something. Generating is not an activity that demands special protection the way original authorship does. I believe this is a large part of the reasoning.
Whoever pays for the tokens.
What if no meaningful thought was put into the code (entirely vibe-coded slop), but it’s made for your employer? Shouldn’t the work be uncopyrightable?
I do. I used a tool to create it. I own the things I create.
Anything else is just bullshit equivocation.
LLMs are just tools we use. If I program an app in C++, do I not own the rights to the executable because my compiler wrote machine code for me?
There is no such thing as ownership of a pattern of information. It has been an illusion, and that illusion is now fading.
It’s the same as photography. No photographer built the multibillion-dollar supply chain for the optical train in a camera, nor did they build the cityscape they are enjoying as a background; they simply set the stage and push a button.
I have a wood cutting machine and some wood. Who owns the timber?
yo Mama
-Claude
Could you please stop posting generated comments to HN? It's not allowed here, and it looks like you've done it over 30 times already.
(Of course, there's no way to be certain of this, but it's what our software thinks, and the overall pattern is pretty convincing.)
See https://news.ycombinator.com/newsguidelines.html#generated and https://news.ycombinator.com/item?id=47340079
Ask ChatGPT Deep Research to cite court cases and it shows that "dark factory" SWE code is not copyrightable under current precedent.
Even steering it with prompts isn't enough. The guy couldn't copyright the image he made with AI; code is no different.
Maybe prompts written by humans are copyrightable.
Can't wait for the Billionaires to entrench in court they can steal everything for these machines and claim it as their own and maybe even reach for anything that it helps produce. Fuck that
The EU AI Act adds another layer to this: Article 50 requires that AI-generated or AI-manipulated content be disclosed to end users. That is a transparency obligation on the deployer, not the model provider.
So even if the copyright question resolves in favour of the human prompter, companies shipping AI-assisted output at scale have a separate compliance obligation to disclose it. Most are not doing this.
The ownership question and the disclosure question are independent, but organisations tend to conflate them.
That was a rather unhelpful TL;DR.