Yesterday (21 January 2016), the World Economic Forum convened an hour-long panel discussion in cooperation with TIME to consider “what if robots go to war?” The session at the annual meeting in Davos, Switzerland, featured four speakers: former UN disarmament chief Angela Kane, BAE Systems chair Sir Roger Carr, artificial intelligence (AI) expert Stuart Russell and robot ethics expert Alan Winfield.
Despite their different backgrounds, all the participants agreed that autonomous weapons systems pose dangers and require swift diplomatic action to negotiate a legally binding instrument that draws the line at weapons not under human control. Killer robot concerns were raised in other panels as well, indicating a high level of interest in the topic at Davos.
The panelists were clear on what a killer robot is and what it’s not. Russell said that a fully autonomous weapon locates or selects a target and fires without human intervention. Carr explained that it would be able to identify and select a target, adjust its behavior if necessary, and deploy force [fire] without any human intervention. Kane observed there is not yet a formal definition of a fully autonomous weapon, but also cautioned states to “not spend too much time on it.”
Carr described “the potential of a $20 billion market” as some 40 countries race to develop autonomous weapons. The Campaign to Stop Killer Robots has listed at least six states as researching, developing, or testing autonomous weapons: US, China, Israel, South Korea, Russia, and the UK. To understand the discrepancy between the two counts, it would help if Carr elaborated on the criteria for, and the countries on, his longer list.
All panelists agreed that keeping killer robots in the hands of responsible governments and out of the hands of non-state armed groups will be challenging if not impossible. Other strategic considerations include the so-called deterrence value, which panelists dismissed. Russell and Carr both observed that fully autonomous weapons will not be able to abide by the laws of war, and no one made the case that they could meet international humanitarian law requirements.
Ethical and moral concerns were among the sharpest concerns articulated, perhaps most surprisingly from arms industry representative Carr, who said fully autonomous weapons would be “devoid of responsibility” and would have “no emotion or sense of mercy.” He warned “if you remove ethics and judgement and morality from human endeavour, whether it is in peace or war, you will take humanity to another level which is beyond our comprehension.”
Winfield said it’s currently not technically possible to build an artificial system with moral agency. He said scientists ask why anyone would want to try, as doing so crosses an “ethical red line.”
Panelists returned to the notion of human control several times, emphasizing its central importance in the emerging international debate. Carr said emphatically that BAE Systems and the UK government strongly believe that removing the human is “fundamentally wrong” and affirmed that nobody wants to allow machines to choose what, where, and how to fight.
Angela Kane, who is now a fellow with the Vienna-based Center for Disarmament and Non-Proliferation, described how states can use the Convention on Conventional Weapons (CCW) framework to negotiate another protocol on autonomous weapons. She criticized the pace of diplomatic deliberations in this forum since 2014 as “glacial” and encouraged the US to “take the lead” diplomatically and “elevate the debate” as France and Germany have been doing on killer robots at the CCW. Russell described the next 18 months to two years as critical in achieving a negotiating process. Carr repeated several times that “a line needs to be drawn,” not least because others will likely use killer robots irresponsibly.
Carr described political leadership as a “big challenge” to getting traction to address killer robots because “the people who make judgements about legislation do not fully understand the challenge or process” and recommended engaging legislators in a risk education process. Winfield called on audience members and the public to contact their elected representative or member of parliament and tell them that killer robots are not acceptable for “we the people.”
Prior to and during the panel, Davos ran a three-question poll on killer robots. During the panel, the first results were presented, finding that 88% of people in the room at Davos and 55% outside agreed that if attacked they should be defended by autonomous weapons rather than by human “sons and daughters.” The panelists dismissed this finding as naive. Winfield said it revealed “extraordinary misplaced and misguided confidence in AI systems” because “even a well-designed robot will behave chaotically and make mistakes in a chaotic environment.” There is also concern that the public buys into media hype about potential benefits and places too much trust in such systems, while generations are becoming desensitized to war by video games.
According to an article by Russell published on the 2016 Davos website, many members of the World Economic Forum’s Global Agenda Council on Artificial Intelligence and Robotics have joined more than 3,000 AI experts in signing a 2015 open letter calling for a ban on autonomous weapons.
The 2016 panel on killer robots is not the first time that Davos has considered the matter. In January 2015, a technology panel featuring Russell and Ken Roth of Human Rights Watch devoted considerable time to the matter.
For more information: