Artificial intelligence experts call for ban
More than 3,000 artificial intelligence researchers, scientists, and related professionals have signed an open letter calling for a ban on autonomous weapons that select and engage targets without human intervention, swelling the ranks of a rapidly growing global movement to address these weapons. The Campaign to Stop Killer Robots welcomes the call, which is available on the website of the Future of Life Institute.
The letter is being presented today, 28 July 2015, at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.
The letter’s signatories include more than 14 current and past presidents of artificial intelligence and robotics organizations and professional associations such as AAAI, IEEE-RAS, IJCAI, and ECCAI. Among them are Google DeepMind chief executive Demis Hassabis and 21 of his lab’s engineers, developers, and research scientists. Much media attention has focused on the letter’s high-profile signatories, such as Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Professor Stephen Hawking, and Professor Noam Chomsky.
Notable female signatories to the open letter include Higgins Professor of Natural Sciences Barbara Grosz of Harvard University, IBM Watson design leader Kathryn McElroy, Professor Martha E. Pollack of the University of Michigan, Professor Carme Torras of the Robotics Institute at CSIC-UPC in Barcelona, Professor Francesca Rossi of Padova University and Harvard, Professor Sheila McIlraith of the University of Toronto, Professor Allison M. Okamura of Stanford University, Professor Lucy Suchman of Lancaster University, Professor Bonnie Webber of the University of Edinburgh, and Professor Mary-Anne Williams of the University of Technology Sydney.
Several signatories to the FLI letter calling for a ban addressed an April 2015 meeting of governments on “lethal autonomous weapons systems,” including Professor Stuart Russell of the University of California at Berkeley and Professor Heather Roff of the University of Denver in Colorado. The next milestone for the international debate over autonomous weapons will be 13 November 2015, when states attending the annual meeting of the Convention on Conventional Weapons decide by consensus whether to continue and deepen their talks on the topic.
According to the open letter, “autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.” It finds that “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — *practically if not legally* — feasible within years, not decades” (italics added).
The signatories identify the “key question for humanity today” as being “whether to start a global AI arms race or to prevent” one, because “if any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons.” The signatories “believe that a military AI arms race would not be beneficial for humanity” and observe that AI “can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” They find that autonomous weapons may not require “costly or hard-to-obtain raw materials,” making them likely to become “ubiquitous and cheap for all significant military powers to mass-produce,” to “appear on the black market,” and to fall into the hands of terrorists, dictators, and warlords.
As AI researchers, the signatories compare themselves to chemists, biologists, and physicists who had “no interest in building chemical or biological weapons” and therefore supported the treaties banning chemical and biological weapons, as well as space-based nuclear weapons and blinding laser weapons.
The signatories observe that “AI researchers have no interest in building AI weapons and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.” To prevent “a military AI arms race,” the signatories therefore call for “a ban on offensive autonomous weapons beyond meaningful human control.”
The letter has attracted strong media interest, from the Guardian to The New York Times. Max Tegmark, an MIT professor and a founder of the Future of Life Institute, told VICE Motherboard, “This is the AI experts who are building the technology who are speaking up and saying they don’t want anything to do with this.”
UPDATE: As of 15 January 2016, a total of 3,037 artificial intelligence and robotics researchers had signed the open letter calling for a ban on autonomous weapons, along with more than 17,000 other endorsers.
For more information, please see:
- The open letter and the list of signatories
- The call by campaigners in Colombia for Latin American AI researchers to sign the open letter
- Our January 2015 webpost on the open letter urging robust and beneficial artificial intelligence
- Ian Kerr, “A ban on killer robots is the ethical choice,” Ottawa Citizen, 31 July 2015.
- Stuart Russell, Max Tegmark, Toby Walsh, “Why We Really Should Ban Autonomous Weapons: A Response,” IEEE Spectrum, 3 August 2015.
- Stuart Russell interview on NPR’s “All Things Considered”
Photo: Participants at a Future of Life Institute conference on the “Future of Artificial Intelligence” held in Puerto Rico in January 2015 (c) Future of Life Institute, 2015.