I have seen people suggest that the problem is that LLMs let you express any of your ideas, but the number of people with ideas worth expressing is limited.
In a sense I think this is accurate, but not inevitable. I think there is a lack of creative thinking, but it has come from a world that doesn't value it and suppresses difference.
There is a brilliant line in Treehouse of Horror IV where Principal Skinner says "Now I've gotten word that a child is using his imagination, and I've come to put a stop to it." It's the perfect comment on the modern education system.
Models trained on that lack of diversity will push one way, but I think they will also open avenues for expression that didn't exist before. The balance will come from how we react and support what we would like to have happen.
I think it has more to do with LLMs being statistical models than with human creativity lacking in the input. The creativity and the millions of voices and tones may be there, but since these models tend to go for the most likely next words, polishing all that away becomes a feature.
A text by a human mind may be seen as a jagged crystal, with rough edges and character. Maybe not perfectly written, but it's special.
An LLM takes a million crystals and grinds them down into what appears as a smooth pebble: the common core of all the crystals. And everyone using the LLM gets very similar pebbles, because regardless of who is speaking to it, it offers up the same most likely next tokens. It's not that creativity is lacking in the input; it's that the LLM picks the words most commonly chosen by all humans in a given context.
For the output to sound imaginative and great, a voice would have to not only exist in the data but be the common, dominating voice among humans. And if it were, it wouldn't be seen as creative, because it would be the new normal.
So I'm not sure there's a good way out of this. You could push the LLM's temperature high so that it becomes more "creative" by picking less popular tokens as it writes, but this tends to make it unpredictable, picking words it shouldn't have. We are still dealing with statistical models here rather than brains, and that's a rough tool for the job.
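The temperature knob above can be sketched in a few lines of Python. The vocabulary and scores here are made up for illustration (a real model emits one score per token in a vocabulary of tens of thousands), but the math is the standard temperature-scaled softmax:

```python
import math
import random

def temperature_softmax(logits, temperature):
    """Turn raw model scores (logits) into a probability distribution.
    Low temperature sharpens it toward the top token; high temperature
    flattens it, giving rarer tokens real probability mass."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores: "pebble" is the model's favourite.
vocab = ["pebble", "crystal", "shard"]
logits = [4.0, 2.0, 1.0]

cold = temperature_softmax(logits, 0.2)  # near-greedy
hot = temperature_softmax(logits, 2.0)   # flatter, riskier

# Sampling the next word from the flattened distribution:
word = random.choices(vocab, weights=hot)[0]
```

At low temperature the sampler almost always returns the top token, which is the smooth-pebble effect; at high temperature the tail tokens get picked often enough to sound surprising, but that same tail is where the words it "shouldn't have" picked live.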