>This is a deranged and factually and tautologically (definitionally) false claim.
Strong words for a weak argument. LLMs are trained on data generated by physical processes (keystrokes, sensors, cameras), not on telepathically extracted "mental models." The text itself is an artifact of reality, not just a description of someone's internal state. If a sensor records the temperature and writes it to a log, is the log a "model of a model"? No, it's a data trace of physical reality.
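To make the analogy concrete, here is a toy sketch (the sensor function and log format are hypothetical, not from any real system):

```python
# Toy illustration: a log line is a trace of a physical measurement,
# not a "model of a model".
import time

def read_temperature_c() -> float:
    """Stand-in for a real sensor driver; returns a measured value."""
    return 21.7  # in reality this number would come from hardware

def append_to_log(path: str, value: float) -> None:
    with open(path, "a") as log:
        # The text written here is caused by the physical quantity measured above.
        log.write(f"{time.time():.0f}\ttemperature_c={value:.1f}\n")

append_to_log("sensor.log", read_temperature_c())
```

Nothing in that log line is anyone's "mental model" - it's text produced directly by a physical measurement.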
>All this removal and all these intermediate representational steps make LLMs a priori obviously even more distant from reality than humans.
You're conflating mediation with distance. A photograph is "mediated" but can capture details invisible to human perception. Your eye mediates photons through biochemical cascades, so it is just as "removed" from raw reality. Proximity to reality isn't measured by counting the steps in a causal chain.
>The model humans use is embodied, not the textbook summaries - LLMs only see the diminished form
You need to stop thinking that a textbook is a "corruption" of some pristine embodied understanding. Most human physics knowledge also comes from text, equations, and symbolic manipulation - not direct embodied experience with quantum fields. A physicist's understanding of QED is symbolic, not embodied. You've never felt a quark.
The "embodied" vs "symbolic" distinction doesn't privilege human learning the way you think. Most abstract human knowledge is also mediated through symbols.
>It's not clear LLMs learn to actually do physics - they just learn to write about it
This is testable and falsifiable - and increasingly falsified. LLMs:
- solve novel physics problems that don't appear in their training data
- debug code implementing physical simulations
- derive equations using valid mathematical reasoning
- make predictions that match experimental results
If they "only learn to write about physics," they shouldn't succeed at these tasks. The fact that they do suggests they've internalized the functional relationships, not just surface-level imitation.
>They can't run labs or interpret experiments like humans
Somewhat true - they can attempt this but aren't very good at it - and it's irrelevant to whether they learn physics models. A paralyzed theoretical physicist who's never run a lab still understands physics. The ability to physically manipulate equipment is orthogonal to understanding the mathematical structure of physical law. You're conflating "understanding physics" with "having a body that can do experimental physics" - those aren't the same thing.
>humans actually verify that the things they learn and say are correct and provide effects, and update models accordingly. They do this by trying behaviours consistent with the learned model, and seeing how reality (other people, the physical world) responds (in degree and kind). LLMs have no conception of correctness or truth (not in any of the loss functions), and are trained and then done.
Gradient descent is literally "trying behaviors consistent with the learned model and seeing how reality responds."
- The model makes predictions.
- The data provides feedback (the actual next token).
- The model updates based on prediction error.
- This repeats billions of times.
That's exactly the verify-update loop you describe for humans. The loss function explicitly encodes "correctness" as prediction accuracy against real data.
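To spell that loop out, here is a minimal sketch of next-token training (a toy bigram-style model in PyTorch; the model, data, and hyperparameters are made up for illustration, not how any production LLM is actually configured):

```python
# Minimal sketch of the predict -> compare-against-real-data -> update loop.
import torch
import torch.nn.functional as F

vocab_size = 256                                       # byte-level "tokens"
model = torch.nn.Embedding(vocab_size, vocab_size)     # logits for next token given current token
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Hypothetical training stream: bytes of real text, i.e. a data trace of the world.
data = torch.tensor(list(b"the kettle boils at 100 C at sea level"), dtype=torch.long)
inputs, targets = data[:-1], data[1:]

for step in range(1000):
    logits = model(inputs)                    # 1. the model makes predictions
    loss = F.cross_entropy(logits, targets)   # 2. the data provides feedback (the actual next token)
    optimizer.zero_grad()
    loss.backward()                           # 3. prediction error flows back...
    optimizer.step()                          # ...and the model updates
    # 4. repeat (billions of times, over far more data, in a real training run)
```

Same structure as the loop described above, just scaled up by many orders of magnitude in an actual training run.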
>No serious researcher thinks LLMs are the way to AGI... accepted by people in the field
Appeal to authority, and overstated. Plenty of researchers do think so, and claiming consensus for your position is just false. LeCun has been on that train for years, so he's not an example of a change of heart. So far, nothing has actually come out of it. Even Meta isn't using V-JEPA to actually do anything, never mind anyone else. Call me when these architectures actually beat transformers.
Okay, I suspected it before, but now it is clear that @famouswaffles is an AI / LLM poster - meaning they are an AI, or are primarily using AI to generate posts.
"You're conflating", random totally-psychotic mention of "Gradient descent", way too many other intuitive stylistic giveaways. All transparently low-quality midwit AI slop. Anyone who has used ChatGPT 5.2 with basic or extended thinking will recognize the style of the response above.
This kind of LLM usage seems relevant to someone like @dang, but I also can't prove that the posts I am interacting with are LLM-generated, so I don't feel it's worth reporting. Not sure what the right / best thing to do here is.
>>> LLMs aren't modeling "humans modeling the world" - they're modeling patterns in data that reflect the world directly.
>>This is a deranged and factually and tautologically (definitionally) false claim.
>Strong words for a weak argument. LLMs are trained on data generated by physical processes (keystrokes, sensors, cameras), not on telepathically extracted "mental models." The text itself is an artifact of reality, not just a description of someone's internal state. If a sensor records the temperature and writes it to a log, is the log a "model of a model"? No, it's a data trace of physical reality.
I don't know how you don't see the fallacy immediately. You're implicitly assuming that all data is factual - as if training an LLM on cryptographically random data would still create an intelligence that learns properties of the real world. You're taking a property of the training data and transferring it onto LLMs. Feed flat-earth books into the LLM and it will not tell you that the Earth is a sphere, and yet that is what you're claiming here (the flat-earth-book LLM telling you the Earth is a sphere). The statement is so illogical that it boggles the mind.