Concerns at the prospect of fully autonomous weapons featured prominently during the high-level Munich Security Conference in Germany this month. More than 600 politicians, business leaders, and officials attended the annual conference held at the Bayerischer Hof Hotel in the center of the city on 15-18 February.
A public event on artificial intelligence and modern conflict organized by the conference saw common views emerge from different perspectives against weapons that, once activated, could identify, select, and attack targets without further human intervention. The event opened with remarks by a “robot” and featured a panel with a president, a general, a former NATO head, and a representative from a coalition of non-governmental organizations, the Campaign to Stop Killer Robots.
Media outlets have widely covered the remarks at the event by the head of Germany’s cyber command, Lieutenant General Ludwig Leinhos, who said: “We have a very clear position. We have no intention of procuring” fully autonomous weapons.
Some have described this comment as a new official German position in support of banning fully autonomous weapons. Others claim that Australia, Canada, and the United Kingdom are similarly wholehearted in backing the ban call.
However, the Campaign to Stop Killer Robots is not about to add these states to its running list of 22 countries that have endorsed the call to ban fully autonomous weapons. That’s in part because statements about an intention not to acquire such weapons are not the same as committing to, and actively working toward, a legally binding preemptive ban. It is also because it is not clear what Germany, the UK, or some others mean by fully autonomous weapons systems, particularly what level of human control is required.
Any statements renouncing these weapons systems are welcome and show how the debate within the armed forces of various countries is increasingly focusing not only on questions relating to the legality of fully autonomous weapons, but also on much bigger concerns. Yet statements alone are insufficient to deal with all the challenges raised by this far-reaching move toward greater autonomy in weapons systems.
Binding legislation is required in the form of a new international treaty and national laws to retain meaningful human control over future weapons systems and individual attacks. Doing so will require that states draw the line by preemptively banning the development, production, and use of fully autonomous weapons.
Campaign to Stop Killer Robots coordinator Mary Wareham addressed the Munich AI panel, highlighting the multiple ethical, legal, operational, moral, proliferation, technical and other challenges raised by allowing machines to take a human life on the battlefield or in policing, border control, and other circumstances.
Another co-panelist, former NATO Secretary-General Anders Fogh Rasmussen, spoke eloquently in favor of banning the production and use of what governments have come to call “lethal autonomous weapons systems.” He predicted that without a legal prohibition these weapons could create instability and warned: “Soon, you may see swarms of robots attacking a country … The robots can be easily deployed, they don’t get tired, they don’t get bored.”
The last co-panelist, Estonia’s President Kersti Kaljulaid, did not directly address concerns raised by fully autonomous weapons, but told a subsequent Munich panel that the United Nations General Assembly should vote on “banning artificial intelligence for military purposes.”
Such proposals should be considered as they could help spur more urgent and concrete action. There’s a need to focus greater attention on the ongoing diplomatic process at the Convention on Conventional Weapons (CCW) in Geneva, where some 90 countries are considering what to do about lethal autonomous weapons systems. The campaign has criticized the CCW talks for aiming too low and going too slowly, but with political will, rapid progress is possible.
During another Munich discussion, Eric Schmidt of Google parent Alphabet was pressed for his views on the call to ban fully autonomous weapons. Schmidt didn’t express support for legally binding measures, but found that “these technologies have serious errors in them and should not be used in life decisions.” He said there are “too many errors” and they shouldn’t be “put in charge of command and control.”
As the director of the venerated Stockholm International Peace Research Institute (SIPRI) points out in a think piece for Munich, “negotiations on autonomy in weapons systems can be accelerated and the development of the technology slowed.” That may be the case, but time is running out.
“Sophia the robot” opened the AI event in Munich and introduced the panelists. Widely viewed by the robotics community as “a fake show robot,” Sophia was created by David Hanson, who previously worked on Disney animatronics. Last year, Sophia fronted a UN summit on “AI for Good” and attracted controversy after it was granted citizenship by Saudi Arabia.
Sophia gave Munich Security Conference chair Wolfgang Ischinger and moderator David Sanger scripted answers to predetermined questions. Some of the responses were borderline racist (asking why the mainly young audience was not wearing “lederhosen”) and sexist (the moderator scolded Sophia for “flirting” after it said “nice uniform” to the general).
Most were vacuous: “hurting someone is not OK unless you have to defend yourself” and “robots could potentially protect humans from each other.” Or misleading. Upon inviting the Campaign to Stop Killer Robots coordinator to the stage, Sophia remarked, “Please let me assure you that I am not a killer robot.” We never claimed it was.
Several media outlets covered the issue of killer robots during the Munich Security Conference, including POLITICO Europe, Reuters, Futurism, and Euronews, and German outlets Netzpolitik, Süddeutsche Zeitung, and Heise.
Watch the video of the public event.
Photo credit: Munich Security Conference / Kuhlmann, 15 February 2018