December 18, 2015
For the Campaign to Stop Killer Robots, the highlight of 2015 was the second meeting on lethal autonomous weapons systems held at the United Nations in Geneva in April. Representatives from more than 90 countries as well as UN agencies, the International Committee of the Red Cross, and the campaign convened at the Convention on Conventional Weapons (CCW) to engage in five days of substantive deliberations with invited experts on ethical, legal, operational, security, technical, and other challenges raised by these weapons.
The campaign has prepared a report on this meeting, which was substantive in content but did not take any decisions. The meeting chair Ambassador Michael Biontino of Germany also produced a 23-page report of the meeting in his personal capacity compiling the main areas of interest and proposals for further work. Last month, nations agreed to hold a third meeting on autonomous weapons at the CCW on 11-16 April 2016, ahead of a milestone five-year Review Conference of the CCW at the end of the year.
It’s clear that a process of “intense discussions” on autonomous weapons concerns is firmly underway, but it has lacked ambition, sufficient time, and, until now, an objective. Convening for one week each year for three years to consider this topic is an insufficient response to the concerns raised and typical of the CCW’s usual aim-low, go-slow approach.
The one new aspect of the CCW’s 2016 mandate of work is a proposal that states use the April meeting, which Biontino will chair again, to “agree by consensus on recommendations for further work” for the CCW on lethal autonomous weapons systems. States should agree on substantive principles as well as the way forward on lethal autonomous weapons systems at the CCW’s Fifth Review Conference, which ban champion Pakistan will preside over on 12-16 December 2016. The recommendations are a noteworthy improvement on the previous mandates, as they imply states are willing to work toward an outcome rather than simply discuss questions. They also provide a strong indication that the process on autonomous weapons will continue at the CCW in 2017.
The Campaign to Stop Killer Robots continued to play a central role in 2015 in building and sustaining attention to autonomous weapons concerns. It is also keeping the pressure on states to do more than simply talk about the challenges by, for example, developing national policy on the weapons.
Monitoring and recording government positions is another part of the campaign’s work. According to its count, Bolivia, Ghana, State of Palestine, and Zimbabwe called for a preemptive ban on lethal autonomous weapons systems during 2015, adding to the ban calls by Cuba, Ecuador, Egypt, Holy See, and Pakistan since 2013.
Some 65 states have articulated their views on autonomous weapons concerns, with most expressing the need to affirm the principle of human control over the selection of targets and use of force, indicating they see a need to draw the line at some point. Indeed, as Germany stated in October, there is now a common understanding that machines should not be allowed to take life-and-death decisions on the battlefield.
During 2015, 14 nations elaborated their concerns with autonomous weapons, including Iraq, which recognized the danger posed and said the weapons should be regulated or prohibited altogether. Zimbabwe said that it was joining like-minded delegations to support the call to preemptively ban lethal autonomous weapon systems because it saw “merit and wisdom in doing what is right and necessary to safeguard this and future generations” from the weapons.
2015 was also the year that the artificial intelligence (AI) community firmly threw its substantial weight behind the call to preemptively ban autonomous weapons, adding to the scientists and roboticists of the International Committee for Robot Arms Control (ICRAC), a co-founder of the Campaign to Stop Killer Robots.
In January, Steve Goose and Ken Roth from Human Rights Watch talked with AI experts at the annual conference of the Association for the Advancement of Artificial Intelligence in Austin, Texas and at the World Economic Forum in Davos, Switzerland respectively, while ICRAC’s Heather Roff presented on autonomous weapons at a conference in Puerto Rico for prominent scientists and AI researchers from industry and academia. These and other initiatives helped pave the way for an open letter calling for a ban on autonomous weapons that was released at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires on 28 July 2015.
Within days, the AI call for a ban on autonomous weapons was signed by more than 2,800 artificial intelligence experts, including 14 current and past presidents of AI and robotics organizations and professional associations such as AAAI, IEEE-RAS, IJCAI, and ECCAI. Google DeepMind chief executive Demis Hassabis signed, as did 21 of his engineers, developers, and research scientists. Much media attention focused on prominent signatories Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, and Professor Stephen Hawking. Notable female signatories include Higgins Professor of Natural Sciences Barbara Grosz of Harvard University and IBM Watson design leader Kathryn McElroy.
The letter generated more than 1,800 media articles in outlets across the world. Signatory Professor Toby Walsh of the University of New South Wales generated further media interest, particularly in Australia, when he joined Campaign to Stop Killer Robots outreach at UNGA First Committee on Disarmament in New York in October 2015.
Public interest in tackling killer robots continued to build in 2015. The Vancouver-based Open Roboethics initiative conducted a poll of 1,000 people in 54 countries, publishing the results in November; its representative AJung Moon presented the findings to the annual CCW meeting. The survey found that 67 percent of its participants support a preemptive ban on lethal autonomous weapons systems. According to the survey’s report, “results indicate that our survey participants are reluctant to endorse the development and use of LAWS … and suggest that more international public engagement is necessary to support democratic decisions about what is appropriate when developing and using robotic weapons technologies.”
Public awareness and calls for more action to address autonomous weapons concerns are needed now, not least to help stimulate the creation of policy. Only two nations have issued policy on autonomous weapons systems: the US, in a 2012 Department of Defense directive that permits the development and use of fully autonomous systems that deliver only non-lethal force, and the UK, in a 2013 statement by the Ministry of Defence that it has “no plans to replace skilled military personnel with fully autonomous systems.”
Canada is one country where expectations for government action on killer robots are running high. The University of Ottawa’s School of Law convened a public talk on 5 November by key campaigners, attended by diplomats who helped steer Canada’s leadership of the Ottawa process that established the 1997 Mine Ban Treaty. It appears the new government led by Justin Trudeau will work to fulfill its goal of forming policy on autonomous weapons in cooperation with civil society experts. At the Halifax International Security Forum later that month, Canada’s new defense minister said that legislators must deal with the ethical dilemmas posed by developments in military technology, which will otherwise fall to soldiers on the battlefield.
In 2016, the Campaign to Stop Killer Robots intends to deepen its national outreach in Canada and other countries such as Germany, Japan, the Netherlands, Norway, and the US, as well as in the Global South.
Please consider supporting the campaign’s work with a donation or better yet, support us both financially and with your active efforts to ban killer robots. Follow us on Twitter and Facebook. Check out the calendar of events.
For more information, please see: