Isn't the boilerplate that "AI" is capable of generating becoming more and more dated with each passing day?
Are the AI firms capable of retraining their models to understand new features in the technologies we work with? Or are LLMs going to be stuck generating ca. 2022 boilerplate forever?
It seems like they should be able to “overweight” newer training data. But the risk is that the newer training data will skew more toward AI slop than the older data does.
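For what it's worth, “overweighting” in this sense usually just means biasing the sampling distribution toward recent documents during training. A minimal sketch of the idea, where the corpus snippets, their dates, and the half_life_days knob are all invented for illustration:

```python
import random
from datetime import date

# Hypothetical corpus: (document, date it was published).
# The snippets and dates are made up purely for illustration.
corpus = [
    ("fn main() { println!(\"hello\"); }", date(2019, 5, 1)),
    ("async fn handler() {}", date(2022, 8, 15)),
    ("let sum: i32 = nums.iter().sum();", date(2024, 11, 3)),
]

def recency_weights(docs, today=date(2025, 1, 1), half_life_days=365):
    """Exponential decay: a document half_life_days old counts half as
    much as one written today. half_life_days is the tuning knob."""
    return [0.5 ** ((today - d).days / half_life_days) for _, d in docs]

# Newer documents are drawn proportionally more often into each batch.
weights = recency_weights(corpus)
batch = random.choices(corpus, weights=weights, k=2)
print(batch)
```

The half-life is exactly the tension here: shrink it and the model tracks new language features faster, but it also absorbs more of whatever recent data happens to contain, slop included.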
I mean, if people keep checking open-source code that uses those new features into GitHub, the models should be able to learn them just fine.
No to the first question; maybe, with a lot of money, to the second.