Of course they're not going to get worse. That would be absurd. The rate of progress will slow down though.
I don't think it would be absurd for them to worsen. If LLMs degrade online discourse while that discourse keeps growing and changing, the trainers face a dilemma: ignore the new (contaminated) training data, or lose track of the zeitgeist.
They might get worse for 2 reasons:
1. AI-free training sets no longer exist. Training on AI-contaminated data might degrade quality, although some claim it will not.
2. Cost. Right now providers are burning a lot of money to convince people the technology is good. They might not be able to keep that up forever, and would then need either to raise prices (which few will want to pay) or to degrade quality to save money.
> Of course they're not going to get worse. That would be absurd. The rate of progress will slow down though.
It's unlikely, but not impossible: model collapse would mean that subsequent models get worse over time, not better.
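The mechanism behind model collapse can be shown with a toy sketch (the Gaussian "model", sample sizes, and generation count below are all illustrative assumptions, not how any real LLM pipeline works): each generation fits a simple model to data sampled from the previous generation's model, so finite-sample error compounds and the fitted distribution tends to lose its spread.

```python
import random
import statistics

# Toy model-collapse sketch (illustrative assumptions throughout):
# generation 0 is the "real data" distribution N(0, 1). Each later
# generation draws a small sample from the previous generation's
# fitted Gaussian, then refits mu/sigma to that sample alone.
random.seed(0)
true_mu, true_sigma = 0.0, 1.0
mu, sigma = true_mu, true_sigma

for generation in range(500):
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)     # refit on model-generated data only
    sigma = statistics.stdev(samples)  # spread shrinks on average each step

print(f"after 500 generations: sigma = {sigma!r} (started at {true_sigma})")
```

Because the log of the fitted spread performs a random walk with negative drift, the distribution almost surely narrows over generations; it is a statistical analogue of later models losing the tails (the rare, distinctive text) of human-written data.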