Meaningful Human Control, Artificial Intelligence and Autonomous Weapons
With the recent rise in concerns over ‘autonomous weapons systems’ (AWS), civil society, the international community and others have focused their attention on the potential benefits and problems associated with these systems. Some military planners see utility in autonomous systems, expecting them to perform tasks in ways and in contexts that humans cannot, to reduce costs, or to reduce military casualties. Yet as sensors, algorithms and munitions become increasingly interlinked, questions arise about the acceptability of autonomy in certain ‘critical functions,’ particularly the identification and selection of targets and the application of force to them. These concerns span ethical, legal, operational and diplomatic considerations.
The original publication can be found here.
Consideration of the key elements required for meaningful human control provides a starting point for assessing developing technologies in the context of autonomous weapons systems.