I have linked here previously to the wonderfully smart writings of Professor Kenneth Anderson on some of the moral and legal issues arising from modern warfare. See eg here.

He now looks at legal and ethical issues facing (sic) future robot soldiers. See how he starts by cleverly framing the issues in an unexpected way, going to the very process of considering the subject:

The regulation of lethal autonomous weapons can be approached from two directions. One is to look from the front-end – starting from where technology stands today, forward across the evolution of the technology, but focused on the incremental changes as and how they occur, and especially how they are occurring now.

The other is to imagine the end-state – the necessarily speculative and sometimes pure sci-fi “robot soldiers” of this post’s title – and look backwards to the present. If we start from the hypothetical technological end-point – a genuinely “autonomous,” decision-making robot weapon, rather than merely a highly “automated” one – the basic regulatory issue is, what tests of law and ethics would an autonomous weapon have to pass in order to be a lawful system, beginning with fundamental law of war principles such as distinction and proportionality? What would such a weapon be and how would it have to operate to satisfy those tests?

This is an important conceptual exercise as technological innovators imagine and work toward autonomy in many different robotic applications, in which weapons technology is only one line of inquiry.

And on he goes from there. No point in lifting out slabs of it. You just have to read the whole brilliant thing, not least his explanation of why ‘robotising’ parts of a process (eg an unmanned flying platform) leads towards robotising other parts of it (shooting from the flying platform), as the robotised bits are capable of working faster than humans can. But see his core point:

The facts about how technology of automation is evolving are important for questions of regulating and assessing the legality of new weapons systems. In effect, they shift the focus away from imagining the fully autonomous robot soldier and the legal and ethical tests it would have to meet to be lawful – back to the front end, the margin of evolving technology today.

The bit-by-bit evolution of the technology urges a gradualist approach to regulation; incremental advances in automation of systems that have implications for weapons need to be considered from a regulatory standpoint that is itself gradualist and able to adapt to incremental innovation. For that basic reason, Matt’s and my paper takes as its premise the need to think incrementally about the regulation of evolving automation.

In other words, don’t try now to fix a clear set of norms, as they are unlikely to match the best outcome and may even thwart getting to it. Watch what happens, learn what it means, evolve a response. Quite a good approach on ‘climate change’ issues too?