> fully autonomous killbots
I keep hearing this, but it should be plainly obvious to everyone (at least here) that an LLM is not the right AI for this use case. That's like trying to use ChatGPT as an airplane autopilot; it doesn't make sense. Other ML models might fit, but not an LLM. Why does the "autonomous killbot" thing keep getting brought up when discussing Anthropic and other LLM providers?
For reference, "autonomous killbots" are in use right now in the Ukraine/Russia war and they run on fpv drones, not acres of GPUs. Also, it should be obvious that there's a >90% probability every predator/reaper drone has had an autonomous kill mode for probably a decade now. Maybe it's never been used in warfare, that we know of, but to think it doesn't exist already is bonkers.
It's almost a silly distinction since ML has been used in weapons for quite a while. For example: Javelin missiles have automatic target recognition, cruise missiles have intelligent terrain following, and long-range drones use algorithms like SLAM for guidance.
It wouldn't make sense to have the LLM try to do the target recognition, trajectory planning, or motor control. It might make sense to have the LLM at a higher level, handling system monitoring and coordination with other instances, to give more flexibility to react to novel situations than rule-based systems.
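To make that division of labor concrete, here's a minimal, entirely hypothetical sketch of the split: the inner control loop is plain deterministic code, and the LLM sits one level up as an advisory supervisor whose output is constrained to a whitelist of high-level actions. Names like `FlightController`, `query_llm`, and the telemetry fields are invented for illustration, and the LLM call is stubbed out.

```python
# Hypothetical sketch: deterministic inner loop, LLM as advisory outer loop.
import json
import random

HIGH_LEVEL_ACTIONS = {"continue_mission", "return_to_base", "hold_position"}

class FlightController:
    """Deterministic low-level control -- no LLM anywhere in this loop."""
    def step(self, mode: str) -> dict:
        # Placeholder for real guidance/navigation/control code.
        return {
            "mode": mode,
            "battery_pct": round(random.uniform(20, 100), 1),
            "link_quality": round(random.uniform(0.3, 1.0), 2),
        }

def query_llm(prompt: str) -> str:
    """Stub standing in for an LLM API call; returns a JSON-encoded action."""
    return json.dumps({"action": "continue_mission", "reason": "telemetry nominal"})

def supervise(telemetry: dict, current_mode: str) -> str:
    """LLM proposes a high-level action; anything off-whitelist is ignored."""
    prompt = (
        f"Telemetry: {json.dumps(telemetry)}\n"
        f"Choose one of {sorted(HIGH_LEVEL_ACTIONS)}."
    )
    try:
        proposal = json.loads(query_llm(prompt)).get("action", "")
    except (json.JSONDecodeError, TypeError):
        proposal = ""
    return proposal if proposal in HIGH_LEVEL_ACTIONS else current_mode

if __name__ == "__main__":
    controller = FlightController()
    mode = "continue_mission"
    for _ in range(3):
        telemetry = controller.step(mode)  # fast, deterministic inner loop
        mode = supervise(telemetry, mode)  # slow, advisory outer loop
        print(telemetry, "->", mode)
```

The point of the whitelist is that even if the LLM produces garbage, the worst outcome is that its suggestion gets ignored; it never touches trajectory planning or motor control directly.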