What exactly is the Platonic Representation Hypothesis?
You don't just "learn reality" by getting good at representations. You can learn a data set. You can learn statistical regularities in things such as human language. You can analyze the concept spaces of LLMs and compare them numerically. I agree with all of that.
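To be concrete about "compare them numerically": here's a minimal sketch using linear CKA (Kornblith et al., 2019), one common representation-similarity metric (the PRH paper itself, as I understand it, uses a mutual nearest-neighbor alignment score, but CKA makes the same point). Everything below is hypothetical stand-in data; the function and shapes are my own illustration, not anyone's published code.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representation
    matrices, shapes (n_samples, d_x) and (n_samples, d_y).
    Rows must correspond to the same inputs. Returns a value in [0, 1]."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(num / den)

# Hypothetical demo: "model B" encodes the same geometry as "model A",
# just in a rotated basis. CKA is invariant to rotation, so it scores ~1.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 256))              # stand-in for model A embeddings
Q, _ = np.linalg.qr(rng.normal(size=(256, 256)))  # random orthogonal matrix
emb_b = emb_a @ Q                                 # stand-in for model B embeddings
print(linear_cka(emb_a, emb_b))                   # ~1.0: same concept space
print(linear_cka(emb_a, rng.normal(size=(1000, 256))))  # much lower: unrelated
```

Note what this measures: similarity of geometry between two sets of vectors on the same inputs. Nothing in the math says anything about "reality"; that interpretive leap is exactly what's at issue.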
What the hell does "learning an objective shared reality" mean?
This reminds me of EY's claim that a Solomonoff inductor would learn all of physics from a few days of a 1920x1080 data stream. Either the claim is false (because the inductor would need to do empirical testing itself), or it's true only if you presuppose that it already has a perfect model of all possible interactions in the world and can decide between all theories a priori. But then why are we even calling it a "perfect learner"? It already contains a model for every possible interaction; nothing is out of distribution for it. You might argue, "Well, which model is the correct one?" That's already the wrong question: empirical data is often about learning what you didn't know that you didn't know, not just about resolving in-distribution unknowns.
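For reference, here's the standard textbook form of the Solomonoff prior (U a fixed universal prefix machine, |p| the program length in bits); the notation is the usual one, not anything specific to EY's claim:

```latex
% Solomonoff's universal prior over observation streams:
% sum over all programs p whose output on machine U begins with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

% Prediction is just conditioning -- every computable "theory" is
% already in the mixture, weighted by its length (simplicity):
M(x_{t+1} \mid x_{1:t}) = \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```

The mixture already enumerates every computable world, so "learning" is only reweighting a fixed hypothesis class by posterior mass; it never discovers a kind of hypothesis it didn't already contain. That's the sense in which "which model is correct" is the wrong question here.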
I just get an ick because people who bring up this hypothesis often talk as if "LLMs converge on a shared objective reality => they are super smart and objective, unlike humans." LLMs can be smart. They can even be smarter than humans. And it's also true that empiricism is king.