Killer robots might seem like science fiction, but unfortunately they are the next big step in digital dehumanisation, cementing the biases and inequalities of previous generations. But they can be prevented.
There are many global and social issues that we will have to face in our lifetimes. Unless we act, future generations will have to live with the consequences of decisions being made right now: the development of new technologies with algorithmic bias, climate change, unjust conflicts, social and political inequity and unrest. The world we live in can be complex, but we often have more power to influence events than we think.
Political leaders should not be investing in autonomous weapons now, when young people will have to confront the reality and consequences of their decisions in the future. Just like with climate change, by stopping the development of killer robots we have the opportunity to bring new hope and to be agents of change with real power and influence.
We’ve grown up in a digital age, and we are the future: the coders, programmers, engineers, soldiers, diplomats, politicians, activists, organisers, and creatives who will have to deal with the realities of killer robots.
We have the power to shape the future of tech, of industry, of politics, and to invest in our communities and in humanity more broadly. Each and every one of us has the opportunity to carry the message that killing should not be delegated to machines, and that every human life has value.
There are ethical, moral, technical, legal, and security problems with autonomous weapons. Machines lack the inherently human characteristics, like compassion, that are necessary to make complex ethical choices. Technology is not perfect, it is not neutral, and killer robots would be vulnerable to technical failures and to hacking. In the case of a mistake or an unlawful act, who would be accountable: the programmer, the manufacturer, the military commander, or the machine itself? This accountability gap would make it difficult to ensure justice, especially for victims. The nature of war would change drastically, as sending machines in place of troops would remove many of the obstacles to today’s conflicts. Autonomous weapons could also be used in other circumstances, such as in border control and policing, and could be deployed to suppress protests and prop up regimes.
If used, fully autonomous weapons would fundamentally shift how wars are fought. They would lead to more asymmetric war, and destabilise international peace and security by sparking a new arms race. They would also shift the burden of conflict further onto civilians. But the risks of killer robots don’t only threaten people in conflict. The use of these weapons within our societies more broadly could also have serious consequences. Think about future protests, border control, policing, and surveillance. Or even about other types of technologies we use. What would it say about our society – and what impact would it have on the fight for ethical tech – if we let ultimate life and death decisions be made by machines? The emergence and consequences of autonomous weapons would affect us all.
Some people say killer robots would be more accurate – that they would be quicker and more efficient than human soldiers, could go into places that are difficult for soldiers to operate in, could be more precise in targeting, save lives by reducing “boots on the ground”, and act as a deterrent. But similar things were said about landmines, cluster munitions, and nuclear weapons – indiscriminate weapons that killed and injured hundreds of thousands of people before being banned. The reality is that killer robots would lack empathy and human judgement. They would lower the threshold to violence and conflict, and become tools for persecution. This is a scary prospect, and we should make sure it doesn’t become a widespread reality. Accuracy can be achieved without removing meaningful human control from the use of force. The threats and risks of killer robots far outweigh any potential advantages.
Around the world momentum continues to build behind the call for limits on autonomy in weapon systems through a new international treaty. Killer robots are regarded as a major threat to humanity that requires a swift and strong multilateral response.
Over 185 NGOs support the movement to stop killer robots. Our calls for a treaty are shared by technical experts, world leaders, international institutions, parliamentary bodies and political champions. Nearly 100 states have acknowledged the importance of human control over the use of force. Hundreds of tech companies have pledged never to participate in or support the development, production, or use of autonomous weapon systems. Thousands of artificial intelligence and robotics experts have warned against these weapons and called on the United Nations to take action. There is also clear public concern. In IPSOS surveys released in 2019 and 2020, more than three in every five people stated their opposition to the development of weapons systems that would select and attack targets without human intervention.
UN Secretary-General Guterres has called autonomous weapons “morally repugnant and politically unacceptable”, and has made multiple statements since 2018 urging states to negotiate a treaty. The International Committee of the Red Cross has said that new law is needed to address autonomy in weapons and has called for a treaty combining prohibitions and regulations. The European Parliament, Human Rights Council rapporteurs, 26 Nobel Peace Laureates, and even tech moguls like Tesla’s Elon Musk, Google’s Demis Hassabis, and Apple’s Steve Wozniak have endorsed calls to ban autonomous weapons.
There are concerns that tech companies, especially those working on military contracts, don’t have policies to make sure their work isn’t contributing to the development of autonomous weapons. A 2019 report from PAX named Microsoft and Amazon among the world’s ‘highest risk’ tech companies for their potential involvement in killer robot development. In 2018, thousands of employees protested Google’s contract with the Pentagon on an initiative called Project Maven. Tech worker action resulted in Google not renewing the contract and releasing a set of principles to guide its work in relation to artificial intelligence. In those ethical AI principles, Google committed not to “design or deploy artificial intelligence for use in weapons”.
Tech should be used to make the world a better place, and tech companies like Amazon, Google, Microsoft, Facebook, and others should commit publicly not to contribute to the development of autonomous weapons. Tech workers, roboticists, engineers, and researchers know this – which is why thousands of them have signed open letters and pledges calling for new international law to address autonomy in weapons and ensure meaningful human control over the use of force.
It’s possible. Many universities have research institutions working on artificial intelligence and machine learning. If you want to be certain, check whether your university has a clear ethical position or public statement on killer robots, or whether it holds contracts with defence ministries or private companies to develop specific technologies. It is crucial for universities to be aware of how the technology they develop could be used in the future. The PAX report “Conflicted Intelligence” warns of the dangers of university AI research and partnerships, and outlines how universities can help prevent the development of autonomous weapons.
If it looks like your university is developing technologies related to killer robots, don’t panic! There is a way to act. In 2018, the Korean Advanced Institute of Science and Technology (KAIST) announced a collaboration with arms producer Hanwha Systems. The goal was to “co-develop artificial intelligence technologies to be applied to military weapons, joining the global competition to develop autonomous arms”. The announcement led to a boycott by professors and students worldwide, which eventually pushed the university to make public reassurances that it would not develop killer robots. It implemented a policy stating that “AI in any events should not injure people”. Hope comes from action. For more ideas on how to keep your university from developing autonomous weapons, check out the brilliant PAX universities Action Booklet.
If you’re reading this, then you are already contributing to the movement. Follow us on social media and join our Youth Network. Teach the people around you – your friends, family, school – about killer robots. Share your thoughts and opinions on social media. Awareness is the first step to change, and the more the message spreads, the more momentum we build.
Youth campaigners in Latin America share why other youth should care about killer robots.
Our Hungarian Youth Activist Coordinator Illés Katona explains why youth should take action to stop killer robots on International Youth Day 2020.
Marta Kosmyna on how great inventions can have unintended consequences. So how do we protect our robots from going to the dark side? As students, you are the changemakers of today and the leaders of tomorrow. Let's talk about the role of robotics in law enforcement and warfare, and how to build robots we can be proud of.
Keep up with the latest developments in the movement to Stop Killer Robots. Join us.