Chatham House conference

The first Chatham House conference on autonomous military technologies, held in London on 24-25 February, brought together individuals from different constituencies to contemplate autonomous weapons and the prospect of delegating human control over targeting and attack decisions to machines. The Campaign to Stop Killer Robots was pleased to attend this well-organized and timely conference, held under the Chatham House Rule, which permits participants to use the information received but not to reveal the identity or affiliation of the speaker or other participants. The conference was a useful opportunity to discuss our concerns about fully autonomous weapons, provide clarifications, and answer questions about our coalition’s focus and objectives.

Some participants have since publicly provided their views on the conference, including Charles Blanchard on Opinio Juris (4 March) and Paul Scharre on the Lawfare blog (3 March).

Several representatives of the Campaign to Stop Killer Robots who attended the Chatham House conference have provided input for this web post, including on the reflections published by Blanchard and Scharre. The campaign’s principal spokespersons, Nobel Peace laureate Jody Williams, roboticist Professor Noel Sharkey, and Human Rights Watch arms director Steve Goose, addressed the conference, while campaigners were present from the non-governmental organizations Action on Armed Violence, Amnesty International, Article 36, Human Rights Watch, International Committee for Robot Arms Control, and PAX (formerly IKV Pax Christi).

The perspective of the Campaign to Stop Killer Robots and its call for a ban on fully autonomous weapons were heard throughout the conference. To ensure that key concerns are not downplayed, and in the spirit of furthering common understanding of this emerging issue of international concern, we offer the following comments on the reflections by Blanchard and Scharre.

Blanchard, a former US Air Force general counsel, gave a public talk on the topic “Autonomous Technologies: A Force for Good?” at Chatham House together with our campaign spokesperson Jody Williams, who received the 1997 Nobel Peace Prize together with the International Campaign to Ban Landmines (ICBL). He is now a partner at Arnold & Porter LLP, a Washington DC law firm that actively supported the negotiation of the 2006 Convention on the Rights of Persons with Disabilities as well as efforts to include victim assistance provisions in the 1997 Mine Ban Treaty.

Blanchard considers “deep philosophical viewpoints” in his piece, which looks at some of the “disputes” at the Chatham House conference over the call for a ban on fully autonomous weapons to enshrine the principle that only humans should decide to kill other humans. Blanchard is concerned that “more death” may result from a ban because autonomous weapons might be “more capable than humans” of complying with the laws of war.

While we do not agree with Blanchard’s skeptical position as to the benefits that a ban on fully autonomous weapons could provide, we welcome his acknowledgement of the counter-argument that letting a machine decide whom to kill would violate notions of human dignity. Blanchard’s assessment of the viability of a ban illustrates how far the debate has advanced in recent months: a ban is now being seriously contemplated.

Paul Scharre heads the 20YY Warfare Initiative at the Center for a New American Security in Washington DC and previously worked for the US Department of Defense, where he led a working group that drafted the 2012 policy directive 3000.09 on autonomy in weapon systems. His comprehensive presentations at the Chatham House conference were well-received, and his rational, measured and well-written reflections on the conference contain many useful observations.

Yet Scharre’s “key takeaways” oversimplify the “areas of agreement” and make it sound as if participants agreed more often than they actually did. His commentary attempts to reflect the views of the conference speakers and their areas of convergence, but the same cannot be done for the audience, which comprised approximately 150 participants from government, military, industry, think tanks, academia, civil society, and media.

With respect to the scope of what was discussed at Chatham House, Scharre’s depiction of the conference as focused only on “anti-materiel” autonomous weapons systems is confusing, as the conference addressed all types of autonomous weapons systems, including “anti-personnel” ones. Nor was the conference specifically limited to “lethal” autonomous weapons as opposed to “non-lethal” or “less-than-lethal” ones. That said, we welcome Scharre’s comments indicating that he is not in favor of fully autonomous anti-personnel weapon systems.

There was indeed convergence among the technologists who spoke on the capabilities of current autonomous technologies and on the notion that today’s precursor systems indicate something more dangerous to come.

Throughout the conference there did appear to be “universal agreement that humans should remain in control of decisions over the use of lethal force.” Consensus on this point was, however, qualified by a number of speakers who suggested that systems with no meaningful human control could be legal and have military utility. Such views illustrate why policy-level restraints will not suffice in addressing the challenges posed by fully autonomous weapons and should be supplemented with new law.

Indeed, this debate is happening because many are contemplating a future with no human control. Yet Scharre gave minimal consideration to proliferation concerns—development, production, transfer, stockpiling—in the “objections” section of his reflection. Concerns over an arms race were raised several times in the course of the Chatham House conference, which was sponsored by BAE Systems, manufacturer of the Taranis autonomous aircraft, the prime example of a UK precursor to autonomous weapons technology. As has been learned from experience with nuclear weapons, proliferation concerns cannot be addressed permanently through regulation and existing international humanitarian law.

Scharre claims that “a major factor in whether autonomous weapons are militarily attractive or even necessary may be simply whether other nations develop them,” but he seems to misunderstand the point of stigmatization in the “endgame” section of his reflections. By proposing that the answer to concerns about “cheating” is an “even playing field” where everyone can have them (and presumably all can be “cheaters”), Scharre dismisses the power of an international, legally binding ban to stigmatize a weapon and ensure respect for the law. A global ban could succeed in stigmatizing autonomous weapons to the extent that no major military power uses them, as has been the case with the Mine Ban Treaty, under whose stigma major powers have not used antipersonnel landmines in years.

Scharre views the commercial sector as driving the “underlying technology behind autonomy,” but that ignores the fact that industry is regulated by the state. Governments won’t prevent industry from developing the underlying technology, nor, as Blanchard notes, is the campaign seeking to do that, because the same technology that will be used in autonomous robotics and AI systems has many non-weapons and non-military purposes. But research and development activities should be banned if they are directed at technology that can only be used for fully autonomous weapons or that is explicitly intended for use in such systems.

Scharre downplays legal concerns in several sections of his reflections. This is in part because the conference panel on international law was dominated by legal advocates of autonomous weapons. Several of the law panelists may have agreed with each other that autonomous weapons are “not illegal weapons prohibited under the laws of armed conflict,” but this was not a view shared by all participants at the conference. In particular, serious concern was expressed about the nature of fully autonomous weapons and their likely inability, in making attack decisions, to distinguish noncombatants and judge the proportionality of expected civilian harms to expected military gains. Although no one can know for sure what future technology will look like, the possibility that fully autonomous weapons would be unable to comply with the laws of war cannot be dismissed at this point.

One speaker argued that if fully autonomous weapons could lawfully be used in any circumstance, they could not be considered per se unlawful. This point may be correct legally, but the case can be made that any weapon can be used legally in some carefully crafted scenario. The possibility of such limited use should not be invoked to legitimize fully autonomous weapons. History has amply demonstrated that once a weapon is developed and fielded, it will not be used only in limited, pre-determined ways. The potential for harm is so great as to nullify the argument for legality.

Scharre claims agreement about “lawful limited uses,” citing three examples of his own. We certainly don’t agree.

Accountability is another area where there was less agreement than depicted in Scharre’s reflections. As he states, machines, as currently envisioned, cannot be held responsible under the laws of war, and it makes sense that programmers or operators not be held liable for war crimes unless they intended the robot to commit one.

The notion of accountability for operators was touched on during the Chatham House conference, but it was not considered in depth and it is important to note the lingering concerns of some audience members. For example, the “fixes” that Scharre cites from the US Department of Defense directive fall far short. Under the directive, human decision makers are charged with responsibility for ensuring compliance with laws of war when the machines they set in motion are unable to ensure this. However, it is unlikely that commanders will be held liable for war crimes if unintended technical failures can be blamed, while programmers, engineers and manufacturers are unlikely to be held liable if they have acted in good faith.

Scharre’s apparent answer to the issue of accountability is a “completely predictable and reliable system,” but how is that possible? Even with rigorous test and evaluation procedures, autonomy will make it significantly harder to ensure predictability and reliability. In fact, one definition of autonomy is that the system, even when functioning correctly, is not fully predictable (due to its complexity and that of the environment with which it is interacting).

In addition, some question whether operators should be held directly responsible for the consequences of fully autonomous weapons’ actions. Can these operators be treated in the same way as operators of a “normal” weapon when fully autonomous weapons are able to make choices on their own?

Scharre seems to dismiss the Martens Clause as only an ethical issue, but it is a legal one as well. Although its precise meaning is debated, the clause is a fixture of international humanitarian law that appears in several treaties. It implies that when there is no existing law specifically on point, weapons that “shock the human conscience” can be regarded as unlawful in anticipation of an explicit ban. It also supports the adoption of an explicit ban on weapons that violate the “principles of humanity and dictates of public conscience.”

Scharre’s post raises a “practical” objection to fully autonomous weapons that was not considered by the conference: “A weapon that is uncontrollable or vulnerable to hacking is not very valuable to military commanders. In fact, such a weapon could be quite dangerous if it led to systemic fratricide.” This concern about “large-scale,” accidental killing is valid, but the same practical argument applies to civilian casualties and not just military ones.

As Scharre notes, there are many concerns with fully autonomous weapons that exist on several fundamentally different levels. We agree that discussions about where the technology is headed are critical, but finding a permanent solution is even more urgent.

The Chatham House event was the first of several important meetings due to be held on killer robots in 2014. The International Committee of the Red Cross (ICRC) will convene its first experts meeting on autonomous weapons systems on 26-28 March. The first Convention on Conventional Weapons (CCW) meeting on lethal autonomous weapons systems will be held at the UN in Geneva on 13-16 May. UN Special Rapporteur Christof Heyns is due to report on lethal autonomous robots and other matters to the Human Rights Council in Geneva during the week of 10 June.

The fact that conferences like the one held by Chatham House are happening shows how the challenge of killer robots has vaulted to the top of the agenda in traditional multilateral arms control and humanitarian disarmament, validating the importance and urgency of the issue and undercutting arguments that fully autonomous weapons are “inevitable” and “nothing to worry about.” The strong and diverse turnout means this is unlikely to be the last Chatham House conference on the topic.

Immediately after the Chatham House conference, the Campaign to Stop Killer Robots held a strategy meeting attended by 50 NGO representatives. The meeting focused on planning the campaign’s strategy for the year ahead at the CCW and the Human Rights Council, as well as on how to initiate national campaigning to influence policy development and secure support for a ban.

Photo: Patricia Lewis, research director for international security at Chatham House (center) introduced the first panel of the Chatham House conference on autonomous military technologies. (c) Campaign to Stop Killer Robots, 24 February 2014
