May 11, 2021
As we begin to emerge from a difficult and disruptive year, we’re eager to reconnect and strategize for the year ahead—because the fight to ban killer robots is not over.
“We are all united in our vision to establish new laws to address autonomy in weapons,” explained Isabelle Jones (Campaign Outreach Manager, Campaign to Stop Killer Robots) at the start of the Campaign’s Global Meeting on 24 March. “We know that a legal treaty is needed to prohibit systems that cross a moral line and to make sure that all weapons systems operate under meaningful human control,” she concluded.
Energized by an opening message from Christian Guillermet-Fernández, Costa Rica’s Deputy Minister of Multilateral Affairs, over 180 campaigners from across the globe began the Global Meeting by saying goodbye to Mary Wareham after almost nine years of leadership. “Mary, you’ve kept us believing that a legal treaty on autonomous weapons is possible,” expressed Richard Moyes (Managing Director and Co-founder, Article 36), “[further], that a treaty banning killer robots is possible.” Since the start of the Campaign in 2012, Mary recalled, “we’ve gone from zero to hero.” Not only has the Campaign raised the global alarm about lethal autonomous weapons, it has also created a legal debate and made the words “human” and “human control” integral to any discussion of these systems.
Drawing a red line: Targeting humans
2021 is a year of new beginnings and opportunities, reflected in the Campaign’s new Vision and Values Statement, which signals an evolution in how the Campaign frames its thinking on killer robots. “Our determination to stop killer robots is part of a broader societal and technological moment,” explained Clare Conboy (Media and Communications Manager, Campaign to Stop Killer Robots). The thoughtful work by campaigners, especially campaigners of colour, over the past year has been integral to our understanding that the social context in which these systems are imagined, created, and deployed can’t be ignored. “The wish by some to create killer robots isn’t separate from the desire to automate what shouldn’t be automated or to begin processing and reducing human beings, in all of our complexities, down to 1s and 0s,” Clare explained. In other words, digital dehumanization is a growing risk that we must address before it’s too late. To effectively prevent digital dehumanization, we must recognize the societal structures that enable many of our social ills and understand how they are intrinsically linked to the issue of killer robots. “Doing work on one requires constant work on the other,” Clare argued.
The following panel discussion, “Drawing a Red Line: Targeting Humans,” focused on one key policy element of an international treaty: a prohibition on systems that target humans. “We really want to concentrate on drawing a moral line against systems that sense and reduce people to objects,” explained Elizabeth Minor (Advisor, Article 36). Panelists offered a bird’s-eye view of why these systems would cross a moral line. “The fundamental level of how a computer understands the world…it doesn’t see a human being. It doesn’t understand the value of a human life. It doesn’t see us as human,” explained Peter Asaro (Co-founder, International Committee for Robot Arms Control), “So to then empower these systems to take human life is to say we’re comfortable with automated systems making these choices when it doesn’t really understand what it’s doing or who we are…”
Alena Popova (Founder, Ethics & Tech) illustrated how even less sophisticated systems are already propping up digital dictatorships—but people aren’t backing down. In Russia, she explained, they “collected more than 100 signatures to support a ban on facial recognition.” “Many people who signed that petition are young,” she continued, “this is the major audience which supports our fight against targeting people with algorithms.” Thompson Chengeta (Executive Board Member, Foundation for Responsible Robotics) highlighted how these systems would disproportionately impact certain groups of people. “Accountability gap regarding the violations of the rights of which people?” he asked. He suggested that campaigners integrate critical theories, like critical race theory, into their research and development of an international treaty in order to address and prevent the harms that certain groups will face from these systems. He closed the panel by asking a poignant question: if the officer who killed George Floyd had been a machine rather than a white man, would we be having a conversation about race? Probably not, he answered, but that doesn’t mean the racial aspects of the murder wouldn’t exist.
Demanding the Future We Want
Now is the time to deepen our arguments and articulate the treaty we want and the world we want. We must be ready to move swiftly and confidently to convince governments to begin negotiations on a new law banning killer robots. Thankfully, the Campaign’s momentum hasn’t slowed. “Despite the setbacks from COVID, our vision for a new international treaty is very much alive and absolutely achievable,” argued Ousman Noor (Government Relations Manager, Campaign to Stop Killer Robots). We’re excited for the year ahead and to once again work with you: the hundreds of campaigners spread across the globe who make this fight possible. Let’s get to work!