You should see my reply to convolvatron below.
I don't think this is a correct formulation of the platonic representation argument:
different models converge on similar representations because they are exposed to the same reality
because that would be true for any statistical system based on real data. I am sure the platonic representation argument is saying something more interesting than that. I believe they are arguing against people like me, who say that LLMs are entirely surface correlations of human symbolic representations of ideas, and not actually capable of understanding the underlying ideas. In particular, humans can speak about things chimpanzees cannot speak about, but that we both understand (chimps understand "2 + 2 = 4" - not the human sentence, but the idea: if you have a pair of pairs on one hand and a quadruplet on the other, you can uniquely match each item between the two collections). Humans and chimps both seem to have some understanding of the underlying "platonic reality," whatever that means.
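To make the "uniquely match each item" part concrete, here is a toy sketch (in Python, names entirely made up, not anything from the paper): the idea behind 2 + 2 = 4 is just that the two collections admit a one-to-one matching with nothing left over, independent of any sentence expressing it.

```python
# Toy illustration: the "idea" behind 2 + 2 = 4, independent of the sentence.
# A pair of pairs and a quadruplet can be matched one-to-one, nothing left over.
pair_of_pairs = [("a1", "a2"), ("b1", "b2")]
quadruplet = ["x1", "x2", "x3", "x4"]

# Flatten the pair of pairs into a single collection of items.
flattened = [item for pair in pair_of_pairs for item in pair]

# A one-to-one matching exists exactly when the collections are the same size;
# zip then pairs each item on one hand with an item on the other.
assert len(flattened) == len(quadruplet)
matching = list(zip(flattened, quadruplet))
print(matching)  # [('a1', 'x1'), ('a2', 'x2'), ('b1', 'x3'), ('b2', 'x4')]
```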
"Not actually capable of understanding" is worthless unfalsifiable garbage, in my eyes. Philosophy at its absolute worst rather than science.
Trying to drag an operational definition of "actual understanding" out of anyone doing this song and dance is like pulling teeth. People have been trying to make the case for decades, and there's still no ActualUnderstandingBench to actually measure things with.