
Problems with autonomous weapons

We didn’t expect a campaign to Stop Killer Robots to be needed in the world – but it is.

Autonomy in weapons systems is a profoundly human problem. Killer robots change the relationship between people and technology by handing over life and death decision-making to machines. They challenge human control over the use of force, and where they target people, they dehumanise us – reducing us to data points.

But technologies are designed and created by people. We have a responsibility to establish boundaries between what is acceptable and what is unacceptable. We have the capacity to do this: to protect our humanity and ensure that the society we live in, and continue to build, is one in which human life is valued – not quantified.


Governments and companies are rapidly developing weapons systems with increasing autonomy using new technology and artificial intelligence. These ‘killer robots’ could be used in conflict zones, by police forces and in border control. But a machine should not be allowed to make a decision over life and death.

UN experts have reported to the Security Council on the recent use of “lethal autonomous weapons” in conflict in Libya. The use of these munitions, with no specific limits on how they function or how they are used, shows that the need for new law is urgent.

 

Nine problems with killer robots 

(and one solution)

 

1. Digital Dehumanisation

Technology should empower all members of society, not reduce us – to stereotypes, labels, objects. Used against people, the technologies that enable autonomous weapons will automatically profile, pattern-match and process human beings as data. The truth is, machines cannot recognise people as ‘people’. So machines deciding whether or not to subject us to attack is the ultimate form of digital dehumanisation.

If we allow this dehumanisation, we will struggle to protect ourselves from machine decision-making in other areas of our lives. We need to prohibit autonomous weapons systems that would be used against people in order to prevent this slide into digital dehumanisation.

2. Algorithmic biases

Allowing autonomous systems that target people would mean allowing systems to reinforce or exacerbate existing structures of inequality. The prejudices in our society live in our data-sets, our categories, our labels and our algorithms. Killing people based on pre-programmed labels and identities will always pull us towards reinforcing prejudices or structures of oppression. Problematic new technologies are also often tested and used on marginalised communities first. We should be challenging structures of inequality, not embedding them into weapons.


3. Loss of meaningful human control

Losing meaningful human control means that the users of weapons are no longer fully engaged with the consequences of their actions. And this means less space for ‘humanity’. Whether on the battlefield or at a protest, machines cannot make complex ethical choices; they cannot comprehend the value of human life. Machines don’t understand context or consequences: understanding is a human capability – and without that understanding we lose moral responsibility and we undermine existing legal rules.

Ensuring meaningful human control means understanding the technologies we use, understanding where we are using them, and being fully engaged with the consequences of our actions.

4. Lack of human judgement and understanding 

People cannot make meaningful judgements if they don’t understand the systems they are using or the contexts they are using them in.

Autonomous systems are becoming more complex. Forms of artificial intelligence and machine learning can present barriers to understanding and predictability. Technologies that change their own behaviour or adapt their own programming independently can’t be used with real control. Other technologies can present a ‘black box’, where it is not possible to know why or how decisions are made. This can produce systems that are very effective at completing certain tasks – but their use isn’t appropriate where any unexpected decision can mean life or death. We need to ensure that systems are sufficiently explainable – and to prohibit systems that cannot be used with meaningful human control.

Even simple autonomous systems present challenges. Under the law, military commanders must be able to judge the necessity and proportionality of an attack and to distinguish between civilians and legitimate military targets. This means not just understanding a weapon system, but also understanding the context in which it might be used. Over a wider area, or a longer period of time, ‘context’ becomes more complex – the situation becomes more and more unpredictable. New legal rules are needed to limit that unpredictability and to ensure meaningful human control.


5. Lack of accountability

People, not machines, must be held accountable. But if people are not making meaningful decisions, then they cannot properly be considered responsible for the consequences of their actions. It would be unjust to make a person liable for the actions of an autonomous weapon system operating beyond their effective control. If we are committed to accountability, then we need rules that ensure that the right people are taking responsibility in the use of force.

6. Inability to explain what happened or why

These problems of control and accountability risk leaving the people who are harmed with nowhere to turn. If we can’t explain how outcomes occurred, then we are leaving victims in the dark – with no explanation and no accountability. People who lose family members and loved ones in conflict rarely get justice for their losses. We should be working to correct that – not programming this disregard into systems that we cannot explain or control.

7. Lowering the threshold to war

The challenges posed by existing methods of remote war would also be amplified through increased autonomy, with humans becoming further detached from the use of force. Existing armed drones have been used by states to apply lethal force in situations where they wouldn’t have before – bringing the fear and horror of conflict to places away from active battlefields.

It is understandable that all states want to reduce the risks of conflict to their own troops. But while replacing people with machines may make military action more politically acceptable at ‘home’, it can make conflict easier to enter into. It also shifts the burden of harm still further onto civilian populations.


8. A destabilising arms race

Large military powers are using political tensions and international power rivalries as the justification for investing in technologies that reduce human control. We have heard it argued from both sides: ‘We need more autonomy in case our adversaries have more autonomy’. Weapons manufacturers are eager to encourage that rhetoric in order to boost profits. These dynamics may benefit some, but they are bad for the rest of us – wasting money on unnecessary military expenditure, building tensions and increasing the risk of conflict (whether deliberate or from an accidental autonomous response). We need more humanity in our international relations, not more dehumanisation.

9. Our relationship with technology

Technology can and should be developed to promote peace, justice, human rights and equality. We all need to take responsibility for the development and use of technology, and for the role it plays in our lives. Deciding to draw a line against machines that make decisions to kill people – drawing a line against technologies that apply force without real human control – provides a fundamental test for our relationship with AI and new technologies across all areas of society. If we don’t draw these lines now, it will only get more difficult as states and commercial companies become more and more invested in development. Rejecting digital dehumanisation and ensuring meaningful human control over the use of force are key steps to building a more empowering relationship with technology for all people, now and in the future.

One clear solution

We are calling for new international law because laws that ban and regulate weapons create boundaries for governments and companies between what’s acceptable and what’s unacceptable. We can drive this forward with increased momentum by using our collective voice and publicly demanding change.

We can stop killer robots

 

Take action


Keep up with the latest developments in the movement to Stop Killer Robots.

Join us