There's also a chasm of (non-)accountability.
You or your subordinates target an elementary school: that's a war crime.
Your "battlefield AI" targets an elementary school: software bug, it happens, can't be helped.
The software is never accountable, and so, conveniently, neither is the human running it.
"War crimes" in practice only apply to the loser of the war and are prosecuted by the victor.
Meaning that whatever horrors are committed on either side, only the loser's will be labeled "crimes". The inclusion of AI doesn't change that.
This isn't even that new. Part of the motivation for building autonomous nuclear-response systems during the Cold War was specifically to remove accountability, and guilt, from human operators. But AI brings it to a new level.