Hacker News

drooby | yesterday at 6:21 PM

That's not snarky at all, that's exactly the point. They did get it both ways.

The comment I was responding to argued that ownership of non-physical things is basically a "polite lie" and that information is just entropy that belongs to whoever can capture it. My point was that the AI companies clearly don't believe that when it applies to them. They patent their architectures, copyright their outputs, sue competitors for IP violations, and lock down their model weights. They fully believe in ownership of non-physical things.

But when it comes to the billions of people whose work they trained on? Suddenly information is free-flowing entropy that belongs to no one.

That's the asymmetry at the heart of this. The rules around IP apparently apply when they protect these companies' profits, but not when they would obligate the companies to share those profits with the people whose work made them possible. Which is exactly why the public needs to assert a claim now, before that asymmetry gets any more entrenched.


Replies

drooby | yesterday at 6:26 PM

Addition:

Also worth noting: collective intellectual property management already exists. ASCAP and BMI have been doing exactly this for decades. Individual songwriters can't enforce their rights every time their music gets played, so they pool their IP, license it collectively, and distribute the revenue. The problem they solved is almost identical to the training data problem: each individual contribution is tiny, but the collective value is enormous. Applying this at the scale of the general public would be novel, but the underlying mechanism isn't. The concept works. It just hasn't been applied to training data yet.
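To make the mechanism concrete, here is a minimal sketch of pro-rata distribution from a collectively licensed pool. This is purely illustrative (it is not ASCAP's or BMI's actual royalty formula, and the names and usage counts are made up): each contributor's payout is proportional to their share of total tracked usage.

```python
def distribute(pool_revenue: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split pool_revenue across contributors in proportion to usage.

    Hypothetical example only: real collecting societies weight plays by
    medium, time slot, and survey sampling, not raw counts.
    """
    total = sum(usage_counts.values())
    return {name: pool_revenue * count / total
            for name, count in usage_counts.items()}

# Three contributors with unequal shares of a $1M pool:
payouts = distribute(1_000_000.0, {"alice": 30, "bob": 60, "carol": 10})
# alice receives 300000.0, bob 600000.0, carol 100000.0
```

The hard part at training-data scale isn't this arithmetic; it's measuring "usage" when billions of contributions each influence a model by a tiny amount.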

pixl97 | yesterday at 8:05 PM

I mean, the AI companies want it this way, but the same laws of information apply to them too. They can patent whatever they want, but as we're seeing, other nations use their models to distill knowledge into new models, and there's almost nothing they can do about it.

Patents, copyright, and lawsuits are all post hoc remedies, which means the milk has already been stolen by the time they kick in. And they only work where the rule of law is respected, which isn't going so well lately.

We're seeing this play out already: there is little to no moat between the models, and nearly everyone with the needed compute catches up pretty quickly. And when those rivalries cross national borders, the only remaining way to resolve these disputes quickly becomes violence.

Given how information works, AI wins this game in the long run. Individual humans scale poorly, and acquiring information one person at a time is a slow process. Looking at this on a company-by-company basis isn't the right way to see how the future with these models is going to play out.