
martythemaniak · today at 6:21 PM

As the article notes, regular Gemini and Gemma also have spatial reasoning capabilities, which I decided to test by seeing whether Gemini could drive a little rover. It mostly succeeded: https://martin.drashkov.com/2026/02/letting-gemini-drive-my-...

LLMs are really good at the sort of tasks that have been missing from robotics: understanding, reasoning, planning, etc., so we'll likely see much more use of them in various robotics applications. I guess the main questions right now are:

- Who sends the various fine-motor commands? The answer most labs/researchers have is "a smaller diffusion model": the LLM acts as a planner, and a smaller, faster diffusion model controls the actual motors. I suspect in many cases you can get away with the equivalent of a tool call - the LLM simply invokes a particular subroutine, like "go forward 1m" or "tilt camera right".

- What do you do about memory? All of these models are either purely reactive or take only a small slice of history as part of the input, so they all need some type of memory/state-management system to work on a task for more than a little while. It's not clear to me whether this will be standardized and become part of the models themselves, or whether everyone will just do their own thing.
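The tool-call option in the first question can be sketched as a thin dispatch layer: the LLM emits a structured command, and plain code maps it onto motor subroutines. Every name here (drive_forward, tilt_camera, the tool-call dict shape) is hypothetical, not from any particular robotics stack:

```python
# Sketch of the "LLM calls a subroutine" approach to fine-motor control.
# All function and command names are made up for illustration.

def drive_forward(meters: float) -> str:
    # On a real rover this would send wheel commands; here we just report.
    return f"drove forward {meters} m"

def tilt_camera(direction: str) -> str:
    return f"tilted camera {direction}"

# Registry of subroutines the LLM is allowed to invoke.
TOOLS = {
    "drive_forward": drive_forward,
    "tilt_camera": tilt_camera,
}

def dispatch(tool_call: dict) -> str:
    """Execute one LLM-emitted tool call,
    e.g. {"name": "drive_forward", "args": {"meters": 1.0}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call.get("args", {}))
```

The appeal is that the hard real-time control loop lives inside each subroutine, so the LLM only has to plan at the level of discrete, named actions.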


Replies

colinator · today at 6:29 PM

For the fine-motor commands: alternatively, the model can write the code that generates them on the fly. In my very limited experiments, it seems to work.

As for memory: my approach is to give the robot a Python REPL and, basically, a file system - the LLM can write modules, poke at the robot via interactive Python, etc.

Basically, the LLM becomes a robot programmer, writing code in real-time.