This is what VLA models are for. They would work much better here. They'd need a bit of fine-tuning, but probably not much. There's a lot of literature out there on using VLAs to control drones.
The detection prepass plus text reasoning pipeline is effectively a perception-to-symbol translation layer, and that is where most of the brittleness will hide. Once you collapse a continuous 3D scene into discrete labels, you lose uncertainty, relative geometry, and temporal consistency unless you explicitly model them. The LLM then reasons over a clean but lossy world model, so action quality is capped by what the detector chose to surface.
The failure mode is not just missed objects, it is state aliasing. Two physically different scenes can map to the same label set, especially with occlusion, depth ambiguity, or near boundary conditions. In control tasks like drone navigation, that can produce confident but wrong actions because the planner has no access to the underlying geometry or sensor noise. Error compounds over time since each step re-anchors on an already simplified state.
Are you carrying forward any notion of uncertainty or temporal tracking from the vision stage, or is each step a stateless label snapshot fed to the reasoning model?
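For concreteness, here's a minimal sketch of what I mean, with made-up names: persistent tracks that carry detector confidence and staleness into the text state, rather than a stateless label snapshot each step:

    # Hypothetical sketch: keep per-object tracks with detector confidence and
    # time-since-seen, instead of handing the reasoner a fresh label list each step.
    from dataclasses import dataclass

    @dataclass
    class Track:
        label: str
        confidence: float      # detector score, decayed while unseen
        steps_since_seen: int

    def update_tracks(tracks, detections):
        # detections: list of (track_id, label, confidence) from the vision stage
        seen = set()
        for tid, label, conf in detections:
            tracks[tid] = Track(label, conf, 0)
            seen.add(tid)
        for tid, t in tracks.items():
            if tid not in seen:
                t.steps_since_seen += 1
                t.confidence *= 0.8   # belief decays on stale observations

    def state_for_llm(tracks):
        # Serialize tracked state, with uncertainty, rather than a bare label set.
        return "\n".join(
            f"{t.label}: conf={t.confidence:.2f}, last seen {t.steps_since_seen} steps ago"
            for t in tracks.values()
        )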
LLMs seem like the wrong platform to operate a drone, in my opinion. I would expect that to be something more like a gaming engine. It should be small, simple, low latency, and maybe based on a first-person shooter running on insane difficulty. Small enough to fit in a tiny firmware space. It should boot so fast the firmware could be upgraded mid-flight without missing a beat. Give it simple friend-or-foe and obliterate anything not green.
I don't understand. Surely training an LSTM on sensor input is a more practical and reasonable approach than trying to get a text generator to speak commands to a drone.
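Something like this is roughly what I mean: a tiny recurrent policy mapping sensor streams to control outputs (dimensions, names, and training setup are purely illustrative):

    import torch
    import torch.nn as nn

    class SensorPolicy(nn.Module):
        # Illustrative only: raw sensor sequence in, control commands out.
        def __init__(self, n_sensors=16, hidden=64, n_controls=4):
            super().__init__()
            self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_controls)   # e.g. roll, pitch, yaw, throttle

        def forward(self, sensor_seq, state=None):
            out, state = self.lstm(sensor_seq, state)   # (batch, time, hidden)
            return self.head(out), state

    policy = SensorPolicy()
    commands, _ = policy(torch.randn(1, 50, 16))        # 50 timesteps of 16 sensor channels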
On the discussion of the right or wrong tool, I find it possible that the ability to reason towards a goal is more valuable in the long run than an intrinsic ability to achieve the same result. Or maybe a mix of both is the ideal.
This is neat! It's a bit amusing in that I worked on a somewhat similar project for my PhD thesis almost 10 years ago, although in that case we got it working on a real drone (heavily customized, based on a DJI Matrice) in the field, with only onboard compute. Back then it was just a fairly lightweight CNN for the perception, not that we could've gotten much more out of the Jetson TX2.
Why would you want an LLM to fly a drone? Seems like the wrong tool for the job -- it's like saying "Only one power drill can pound roofing nails". Maybe that's true, but just get a hammer.
I’m guessing Google's model has extensive Minecraft sandbox-mode YouTube vids in its training, which would explain exactly this perspective.
I think it's fascinating work even if LLMs aren't the ideal tool for this job right now.
There were some experiments with embodied LLMs on the front page recently (e.g. basic robot body + task) and SOTA models struggled with that too. And of course they would - what training data is there for embodying a random device with arbitrary controls and feedback? They have to lean on the "general" aspects of their intelligence which is still improving.
With dedicated embodiment training and an even tighter/faster feedback loop, I don't see why an LLM couldn't successfully pilot a drone. I'm sure some will still fall off the rails, but software guardrails could help by preventing certain maneuvers.
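A guardrail layer could be as simple as validating each command before it reaches the flight controller; rough sketch, with a made-up command schema:

    # Made-up command schema; the point is that unsafe maneuvers never reach
    # the flight controller, whatever the model asks for.
    MAX_SPEED_CM_S = 50
    MIN_ALT_CM, MAX_ALT_CM = 30, 300

    def guard(command, altitude_cm):
        # Return a safe version of the command, or None to reject it outright.
        if command["type"] == "flip":
            return None                                  # no acrobatics, ever
        if command["type"] == "move":
            command["speed"] = min(command["speed"], MAX_SPEED_CM_S)
            new_alt = altitude_cm + command.get("dz", 0)
            if not (MIN_ALT_CM <= new_alt <= MAX_ALT_CM):
                return None                              # would leave the safe altitude band
        return command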
I am curious how these models would perform, and how much energy they'd take, detecting objects in semi-real time: SmolVLM2-500M, Moondream 0.5B/2B/2.5B, Qwen3-VL (3B) https://huggingface.co/collections/Qwen/qwen3-vl
I am sure this is already being worked on in Russia, Ukraine and the Netherlands. A lot can go wrong with autonomous flying. One could load the VLM onto a high-end Android phone on the drone and have dual control.
> I gave 7 frontier LLMs a simple task: pilot a drone through a 3D voxel world and find 3 creatures.
> Only one could do it.
If I understood the chart correctly, even the successful one only found 1/6 of the creatures across multiple runs.
At least he's not feeding real drones to the coyotes... oh, there's a link in the readme https://github.com/kxzk/tello-bench
In a real-world test you would give the LLM a somewhat high-level tool call like GoTo(object), and that tool would call another program which identifies the objects in frame and uses standard routines to fly to them.
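Something along these lines; the helper functions are placeholders for whatever perception and control stack you already have:

    # Placeholders for the existing perception/control stack (hypothetical names):
    def current_frame(): ...
    def detect_objects(frame): return []    # -> [{"label": ..., "bearing_deg": ..., "distance_m": ...}]
    def fly_to(bearing_deg, distance_m): ...

    def goto(object_name):
        # The only tool the LLM sees; the geometry stays below this line.
        detections = detect_objects(current_frame())
        target = next((d for d in detections if d["label"] == object_name), None)
        if target is None:
            return f"{object_name} not in view"
        fly_to(target["bearing_deg"], target["distance_m"])
        return f"arrived at {object_name}"

    TOOLS = [{
        "name": "goto",
        "description": "Fly the drone to a named object currently in view.",
        "parameters": {
            "type": "object",
            "properties": {"object_name": {"type": "string"}},
            "required": ["object_name"],
        },
    }]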
Interesting. In some benchmarks I even see Flash outperforming the thinking models in general reasoning.
Gemini Flash beats Gemini Pro? How does that work?
Gemini Pro, like the other models, didn't even find a single creature.
LLMs are trained on text. Why would we expect them to understand a visual and tactile 3D world?
I can’t really take this too seriously. This seems to me to be a case of asking “can an LLM do X?” Instead, the question I’d like to see is: “I want to do X; is an LLM the right tool?”
But that said, I think the author missed something. LLMs aren’t great at this type of reasoning/state task, but they are good at writing programs. Instead of asking the LLM to search with a drone, it would be very interesting to know how they would perform if you asked them to write a program to search with a drone.
This is more aligned with the strengths of LLMs, so I could see this as having more success.
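For example, the kind of program you might get back: a simple sweep that stops when perception reports a creature (move_to/see_creature are illustrative stand-ins, not any real API):

    # Illustrative: a boustrophedon sweep that stops when the perception
    # callback reports a creature. move_to/see_creature are assumed to exist.
    def sweep_search(world_size, step, move_to, see_creature):
        for row, y in enumerate(range(0, world_size, step)):
            xs = list(range(0, world_size, step))
            if row % 2:
                xs.reverse()              # reverse every other row
            for x in xs:
                move_to(x, y)
                if see_creature():
                    return (x, y)
        return None

    # Fake callbacks just to show the shape of the call:
    hit = sweep_search(64, 8, move_to=lambda x, y: None,
                       see_creature=lambda: False)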
This sounds like a good way to get your drone shot down by a Concerned Citizen or the military.
LLMs flying weaponized drones is exactly how it starts.
"drone"
Gemini 3 is the only model I've found that can reason spatially. The results here are accurate to my experiments with putting LLM NPCs in simulated worlds.
I was surprised that most VLMs cannot reliably tell if a character is facing left or right; they will confidently lie no matter what you do (even Gemini 3 cannot do it reliably). I guess it's just not in the training data.
That said, Qwen3-VL models are smaller/faster and better "spatially grounded" in pixel space, because pixel coordinates are encoded in the tokens. So you can use them for detecting things in the scene and where they are (which you can project to 3D space if you are running a sim). But they are not good reasoning models, so don't ask them to think.
That means the best pipeline I've found at the moment is to tack a dumb detection prepass on before your action reasoning. This basically turns 3D sims into 1D text sims operating on labels -- which is something that LLMs are good at.
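Roughly like this (function names are placeholders, not any particular API): the grounded model returns labels with pixel coordinates, you project those into sim space, and the reasoning model only ever sees the serialized text:

    # Placeholder names, not a specific API: detections come from the grounded
    # VLM as (label, u, v) pixel hits; unproject() maps pixels to sim coordinates.
    def build_text_state(detections, unproject):
        lines = []
        for label, u, v in detections:
            x, y, z = unproject(u, v)
            lines.append(f"{label} at ({x:.1f}, {y:.1f}, {z:.1f}) relative to the drone")
        return "You see:\n" + "\n".join(lines) if lines else "You see nothing."

    def step(detections, unproject, reasoning_llm):
        state = build_text_state(detections, unproject)
        # The reasoner now operates on a 1D text world of labels.
        return reasoning_llm(state + "\nChoose one action: forward/back/left/right/up/down/turn.")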