I 100% understand and agree with the AI community's argument around lethal autonomy.
But I am trying to understand this from the perspective of defence & govt. Why is it so business as usual for them? Do they consider this on par with missiles that use infra-red/heat sensors for tracking and locking? Where does the definition of lethal autonomy begin and end?
Just putting this out there as a point to ponder. By itself, this may rightly be too broad and should be debated.
My most straightforward read is that the military simply doesn't want their contractors to have a say in the war doctrine. Raytheon doesn't get to say "you can only bomb the countries we like, and no hitting hospitals or schools". It doesn't necessarily mean the Pentagon wants to bomb hospitals, but they also don't want to lose autonomy.
A less charitable interpretation is that the current doctrine is "China / Russia will build autonomous killbots, so we can't allow a killbot gap".
I'm frankly less concerned about "proper" military uses than I am about the tech bleeding into the sphere of domestic law enforcement, as it inevitably will.
The military has deployed lethal autonomous weapons since at least 1979. LLMs might be useful for certain missions, but from a military perspective they're nothing fundamentally new.
https://www.vp4association.com/aircraft-information-2/32-2/m...
> But I am trying to understand this from the perspective of defence & govt.
Hum...
The one thing domestic surveillance enables is defining targets inside the country, and the one thing lethal autonomy enables is executing targets that a soldier would refuse to.
Those things don't have other uses.
For surveillance at least, multimodal AI is old hat: https://en.wikipedia.org/wiki/Sentient_(intelligence_analysi...
If you're one of the contractors working in NRO or aware of Sentient, OpenAI and Anthropic probably do look like supply chain risks. They want to subsume the work you're already doing with more extreme limitations (ones that might already be violated). So now you're pitching backup service providers, analyzing the cost of on-prem, and pricing out your own model training; it would be really convenient if OpenAI just agreed to terms. As a contractor, you can make them an offer so good that it would be career suicide to refuse it.
Autonomous weapons are a horse of a different color, but it's safe to assume the same discussions are happening inside Anduril et al.
I don't think it's about lethal autonomy specifically as much as it's about government autonomy, period. They don't think private companies should have any veto power over how the government uses the technology it's provided.
On its face that's not a crazy stance: governments are meant to represent the public, while private companies obviously aren't. I think it's somewhat understandable why the government might reject that kind of "we know better than you" clause.
Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.