Hacker News

testing22321 · yesterday at 3:15 AM · 2 replies

> This does nothing to shield Linux from responsibility for infringing code.

It’s no worse than non-AI assisted code.

I could easily copy-paste proprietary code, sign my name that it’s not and that it complies with the GPL and submit it.

At the end of the day, it just comes down to a lying human.


Replies

sarchertech · yesterday at 6:36 PM

That’s the difference. In practice a human has to commit fraud to do this.

But a human just using an LLM to generate code can do it accidentally: regurgitation of training text is a documented failure mode of LLMs.

And there’s no way for the human using it to be aware it’s happening.

LtWorf · yesterday at 9:56 PM

Yes, but if you do that manually you are acting in bad faith; if you ask an AI to do it, you have no idea whether you will be liable for anything or not.