> I was made redundant recently "due to AI" (questionable) and it feels like my works in some way contributed to my redundancy where my works contributed to the profits made by these AI megacorps while I am left a victim.
This is increasingly common, and I don’t think it’s questionable that the LLMs software engineers help train are contributing to the obsolescence of software engineers. The large companies that operate these LLMs both 1) benefit from the huge body of open-source software and, at the same time, 2) erode the very foundation that made open-source software explode in popularity in the first place (which happened thanks to copyright, or, more precisely, the ability to use copyright to enforce copyleft and thus protect the future of volunteer work done by individual contributors).
The GPL was written long before this technology started being used this way. There’s little doubt that the spirit of the GPL is violated at scale by commercial LLM operators, and considering the amount of money that has been sunk into this, it’s very unlikely they would ever yield to the public the models themselves, the ability to mass-scrape the entire Internet to train equivalent models, or the capability to run those models and obtain comparable results. The claim of “democratising knowledge” is disingenuous if you look deeper into it: somehow, the operators themselves are always exempt from that democratisation and remain free to profit from our work, while our work is what gets “democratised”. Personally, this strikes me more as expropriation than democratisation.