The great promise, and the great disaster, of LLMs is that for any topic on which we are "below average," the bland, average output seems like a great improvement.
Counterintuitively, this is a disaster.
We don't need more average stuff. Below-average output serves as a signal, prompting us to direct our resources toward producing work of higher value.