While I agree with this intuitively, I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc.
There is a massive difference between an outright transformation of something you created yourself and a collage of snippets plus some sauce based on stuff you did not write yourself. If all you did to use your AI was train it exclusively on your own work product created during your lifetime, I would have absolutely no problem with it; in fact, in that case I would love to see copyright extended to the author.
But in the present case the authorship is just removed by shredding the library and then piecing the sentences back together. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive that they are, to some degree, lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {"), the weaker the claim for copyright protection. But higher-level constructs past a couple of lines that are unique in the training set, and that are reproduced in the output modulo some name changes and/or language changes, should automatically count as transformation (and hence as infringing or creating a derivative work).
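To make the generic-versus-distinctive distinction concrete, here is a sketch with two hypothetical snippets (both invented for illustration, not taken from any real codebase): the first is boilerplate found verbatim in thousands of projects, the second is the kind of multi-line, project-specific construct the comment argues should still be protected even after renaming.

```python
# Generic: a trivial accumulation loop. Countless codebases contain
# something byte-for-byte equivalent; no plausible claim to protection.
def sum_values(values, maxval):
    total = 0
    for i in range(maxval):
        total += values[i]
    return total

# Distinctive (hypothetical): several lines of project-specific logic,
# ordering, and naming. If a model reproduced this "modulo some name
# changes", the argument above says it should count as a derivative work.
def settle_invoice_ledger(ledger, grace_days=14):
    overdue = [e for e in ledger if e["age_days"] > grace_days]
    overdue.sort(key=lambda e: (e["priority"], -e["age_days"]))
    # Apply a 1.5% late fee to each overdue entry, keyed by invoice id.
    return {e["id"]: round(e["amount"] * 1.015, 2) for e in overdue}
```

The point is not the specific business rule (which is made up here) but that the second snippet's combination of filtering, ordering, and fee logic is unlikely to appear independently, so its reproduction is evidence of copying rather than convergence.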
>can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran
That's a bad comparison, for two reasons. First, C is a transparent language that requires understanding of its underlying mechanics. Using C doesn't absolve you from understanding lower-level concepts, and it was never treated as if it did. The power of C comes with a clear warning label that it's a double-edged sword.
Second, insofar as people have used higher-level languages as a replacement for understanding and introduced an "everyone can code now" mentality, the criticism has been validated. What we got, long before AI tooling, were shoddy, slow, insecure, Tower-of-Babel codebases that were awful for the exact same reasons these newest practices are awful.
Introducing new technology must never be an excuse for ignorance; the more powerful the tool, the greater the knowledge required of the user. You don't hand the most potent, dangerous weapon to the least competent soldier.
The HLL-to-LLM switch is fundamentally different from the assembler-to-HLL switch. With HLLs, there is a transparent homomorphism between the input program and the instructions executed by the CPU. We exploit this property to write programs in HLLs with precision and awareness of what, exactly, is going on, even if we occasionally have to drop to ASM because all abstractions are leaky.

The relation between an LLM prompt and the instructions actually executed is neither transparent nor a homomorphism. It's not an abstraction in the same sense that an HLL implementation is. It requires a fundamental shift in thinking. This is why I say "stop thinking like a programmer and start thinking like a business person" when people have trouble coding with LLMs. You have to be a whole lot more people-oriented and worry less about all the technical details, because trying to prompt an LLM with anywhere near the precision of using an HLL is just an exercise in frustration. But if you focus on the big picture, the need that you want your program to fill, LLMs can be a tremendous force multiplier in terms of getting you there.
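One concrete facet of that transparency is determinism: feed a compiler the same source twice and you get the same machine instructions twice. A minimal sketch using CPython's built-in `compile()` (Python bytecode standing in for CPU instructions; LLM sampling, by contrast, typically varies run to run at nonzero temperature):

```python
# The same HLL source always compiles to the same instructions.
src = "def area(w, h):\n    return w * h\n"

code1 = compile(src, "<demo>", "exec")
code2 = compile(src, "<demo>", "exec")

# Pull the function's code object out of each module's constants.
fn1 = next(c for c in code1.co_consts if hasattr(c, "co_code"))
fn2 = next(c for c in code2.co_consts if hasattr(c, "co_code"))

# Byte-for-byte identical output from identical input.
print("identical bytecode:", fn1.co_code == fn2.co_code)
```

That predictable input-to-instructions mapping is exactly what a prompt-to-program pipeline lacks, which is why the two "switches" aren't comparable.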
The study compares ChatGPT use, search engine use, and no tool use.
The issues with moving from ASM to C/Fortran are different from using LLMs.
LLMs are automation, and general-purpose automation at that. "The Ironies of Automation" came out in the 1980s, and we've known about these issues ever since: for example, the vigilance decrement that comes when you switch from operating a system to monitoring it for rare errors.
On top of that, previous systems were largely deterministic; you didn't have to worry that the instrumentation was going to invent new numbers on the dial.
So now automation will move from flight decks and assembly lines to mom-and-pop stores, and from deterministic to non-deterministic.
> "I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc."
There was no "switch"; the transition took literally decades. Assembler and high-level languages co-existed in the mainstream all the way until the 1990s, because it was well understood that there was a trade-off between getting the best performance with assembler (e.g. DOOM's renderer in 1993) and getting ease of development and portability (something that really mattered when there were a dozen different CPU architectures around) with high-level languages.
There is no need to get past the argument because it doesn't exist. Nobody said that.