> it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
I could not disagree more. A big part of it is also knowing when NOT to pull the trigger, and that's much harder than you'd think. If you think full self-driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.
Yes, but it doesn't have to be error-free. Friendly-fire rates in symmetrical hot wars are pretty high; it's considered a cost of going to war.
If autonomous weapons lead to a net battlefield advantage, I agree with the GP, they will be used. It is the endgame.
The big asterisk in what you're saying is that, like self-driving cars, it's hardest when you want to be the most precise and limit the downsides. In that paradigm, false positives and false negatives both carry a very big cost.
If you simply wanted to cause havoc and destruction with no regard for collateral damage, the problem space is much simpler, since you only need enough true positives to be effective at your mission.
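The asymmetry can be made concrete with a standard expected-cost decision rule (the numbers below are made up for illustration): the minimum classifier confidence at which firing beats holding fire depends entirely on how much worse a false positive is than a false negative.

```python
# Illustrative sketch: cost-asymmetric firing threshold.
# Fire only when the expected cost of firing, (1 - p) * cost_fp,
# is below the expected cost of holding fire, p * cost_fn.
# Solving for p gives the threshold below. All cost values are
# hypothetical, chosen to illustrate the two regimes in the comment.

def fire_threshold(cost_fp: float, cost_fn: float) -> float:
    """Minimum confidence p at which firing has lower expected cost
    than holding fire."""
    return cost_fp / (cost_fp + cost_fn)

# "Careful" rules of engagement: a wrong strike is 1000x worse than a miss.
careful = fire_threshold(cost_fp=1000, cost_fn=1)

# "Havoc" mode: collateral damage weighted no worse than a miss.
havoc = fire_threshold(cost_fp=1, cost_fn=1)

print(round(careful, 3))  # 0.999 -- effectively demands near-certainty
print(round(havoc, 3))    # 0.5   -- fire on any better-than-even guess
```

Under the careful cost weighting the system must be almost certain before firing, which is exactly the hard perception problem; the indiscriminate weighting lets it shoot at coin-flip confidence.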
Coding with AI has shown that it takes an even higher level of responsibility and discipline than before to get good results without out-of-control downside. I think killing with AI would be the same, only more severe.
> A big part of that is also knowing when NOT to pull the trigger
"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"
And the US learned this lesson the hard way in Iraq: even human intelligence struggles with it. There were major problems throughout the war with individual soldiers not adhering to the published rules of engagement.
We already have fully autonomous weapons, and have had them for over a century. We call them "landmines".
I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.
The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.