
Military and killer robots

The advent of autonomous weapons is often described as the third revolution in warfare. Gunpowder and nuclear weapons were the first and second.

The deployment and use of gunpowder and nuclear weapons fundamentally changed how conflicts were fought and experienced by combatants and civilians alike.

Advances in technology now allow weapons systems to select and attack targets autonomously using sensor processing. This means less human control over what is happening and why, and it brings us closer to machines making decisions about who to kill or what to destroy.

Autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war.


History shows that the use of such weapons would not remain limited to certain circumstances. It is also unclear who, if anyone, could be held responsible for unlawful acts caused by an autonomous weapon – the programmer, manufacturer, commander, or the machine itself – creating a dangerous accountability gap.

Some types of autonomous weapons will process data and operate at tremendous speeds. Complex, unpredictable and incredibly fast in their functioning, these systems would have the potential to make armed conflicts spiral rapidly out of control, leading to regional and global instability. Killer robots intrinsically lack the capacity to empathise or to understand nuance or context.

That is why Stop Killer Robots is working with military veterans, tech experts, scientists, roboticists, and civil society organisations around the world to ensure meaningful human control over the use of force. We are calling for new international law because laws that ban and regulate weapons create boundaries for governments, militaries and companies between what’s acceptable and what’s unacceptable.


Killer Robots: A former military officer's perspective

Lode Dewaegheneire served for more than 30 years as an officer in the Belgian Air Force. After an operational career as a helicopter pilot, he was also a military advisor to the Belgian delegation at the United Nations in Geneva. He is now a military advisor to Mines Action Canada, a member of Stop Killer Robots.

FAQs

Some supporters of autonomous weapons have argued that they will be more accurate or precise than humans, and therefore lead to less collateral damage. For example, Ron Arkin has written, “Robots probably will possess a range of sensors better equipped for battlefield observations than humans have, cutting through the fog of war.” Supporters argue that they will bring increased speed and efficiency to the battlefield, that they would be able to operate in environments with insecure communications, and that they could save lives by decreasing the need for human soldiers and acting as a deterrent.

But similar arguments were made for other indiscriminate weapons in the past, like landmines, cluster munitions, and nuclear weapons. Those weapons claimed hundreds of thousands of victims before being banned by international treaties. By reacting to their environment in unexpected ways, autonomous weapons would increase risks to soldiers and civilians alike. Improved precision can be achieved without removing meaningful human control from the use of force. The potential advantages of autonomous weapons are far outweighed by the serious challenges they pose to international law and security.

Another danger arising from the deployment of autonomous weapons is dependence on wireless communications. Wireless communications are susceptible to intentional disruption such as hacking, ‘jamming’ and ‘spoofing’, which could render systems inoperable or corrupt their programming. In 2012, researchers used a ‘fake’ GPS signal to redirect the path of an unmanned air system, successfully spoofing the system and demonstrating the security vulnerabilities of unmanned and autonomous weapons. In a world where cybersecurity and cyberwar raise growing concerns, more sophisticated hacking could enable a complete takeover of the operation of autonomous systems, including the potential release of weapons.

Autonomous weapons, which would select and engage targets on the basis of sensor data, would give militaries the ability to target people based on race, ethnicity, gender, style of dress, height, age, pattern of behaviour, or any other available data that could make up a targetable group. Technology is not perfect, nor is it neutral, and killer robots would be vulnerable to technical failures. There are also concerns that killer robots would be much cheaper and easier to produce than other weapons of mass destruction. Thousands of scientists have warned that, because no costly or hard-to-obtain raw materials are required, autonomous weapons could be mass-produced. If development is left unregulated, there is a risk of these systems being acquired and deployed by non-state actors or individuals as well as by states.

There are also ethical, moral, technical, legal, and security problems with autonomous weapons. Machines lack inherently human characteristics like compassion and an understanding of human rights and dignity, which are necessary to make complex ethical choices and apply the laws of war. In the case of a mistake or an unlawful act, autonomous weapons present an accountability gap, which would make it difficult to ensure justice, especially for victims. The nature of war will drastically change as sending machines in place of troops lowers the threshold for conflict. Autonomous weapons could also be used in other circumstances, such as border control and policing.

In a military engagement where lethal force is directed or applied, there is a clear chain of command and accountability. Because militaries function as hierarchical organisations, this command structure is top-down, from the commander who orders the use of force to the person who ‘pulls the trigger’. With autonomous weapons, command and control are threatened, and responsibility and accountability are not so clear-cut.

If an autonomous weapon can select and engage its own targets, the chain of command is disrupted. In these systems, there is a period of time after activation during which the weapon system can apply force to a target without additional human approval. Even if some parameters are predetermined, the dynamic nature of conflict means the machine would engage targets without a direct command. The human operator therefore does not determine specifically where, when or against what force is applied.

Issues of explainability, predictability, and replicability mean that targeting and engagement by autonomous weapons also threaten military control. There will be little or no clarity on why or how a killer robot made a specific decision. If those decisions result in errors – like friendly fire or excessive collateral damage – it will be difficult to determine whether they were the result of the machine’s functioning or of adversarial tampering. When command and control are disrupted in this way, the accountability gap widens further. Who is responsible for unlawful acts by an autonomous weapon? The people who set the variables in its utility function? The people who programmed it in the first place? The military commander? Who will be held accountable?

Some have argued that control can be maintained through appropriate oversight, or through the ability to intervene in or cancel an attack. However, there are serious concerns over whether human operators could maintain the situational understanding necessary for meaningful control. The amount of data a human commander would have to review would outstrip human ability to analyse it, and the inability to interpret these huge data sets would further distance humans from understanding what is happening on the battlefield. The speed at which machines process and react compared to humans will increase the pace of war, and the result will be a loss of meaningful human control over the use of force.

One of the main principles of International Humanitarian Law (IHL) is distinction – the requirement to distinguish between combatants and civilians. But in recent decades, conflicts have increasingly been non-international armed conflicts fought between state forces and non-state actors like guerrillas or insurgents. Enemy combatants in this type of warfare rarely wear standard military uniforms, making it harder to distinguish them from civilians, and it is often the goal of non-state actors to blend in or appear as civilians in order to gain tactical advantage. Given the difficulties human soldiers face in determining who is a legitimate target, it is easy to see the even greater risk posed by the use of autonomous weapons.

A machine would determine if the target was a combatant based purely on programming, likely developed in a sterile laboratory years before the decision to kill was made. Abrogating life and death decisions to a machine is morally, ethically and legally flawed.

John MacBride, LCol (Retd), in an open letter on military personnel’s call for a ban on autonomous weapons.

Diplomatic talks concerning autonomy in weapons systems are entering a critical stage, though talks at the United Nations under the Convention on Conventional Weapons have made little progress since 2014. A handful of military powers are stubbornly resisting proposals to launch negotiations on a legally binding instrument addressing autonomy in weapons systems. Meanwhile, military investment in artificial intelligence and emerging technologies continues unabated. If left unchecked, this could result in the further dehumanisation of warfare and diminished public trust in the many promising and beneficial civilian applications of emerging technologies.


Stop Killer Robots is not seeking to ban weapons that operate under meaningful human control. We are not opposed to artificial intelligence (AI) or robotics broadly, or even to the use of AI or robotics by the military. We are not proposing a ban on unarmed systems designed to save lives, such as autonomous explosive ordnance disposal systems, which may operate with or without human control. But we believe there is a line which should never be crossed: life and death decision-making should not be delegated to machines.

The development or deployment of autonomous weapons that target people and cannot or do not operate under meaningful human control would lower the threshold for entry into armed conflict, and any such system that is deployed would be vulnerable to hacking or malfunction, increasing the risk to friendly troops and civilians alike. It is our understanding that no military commander would want to cede control on the battlefield to an autonomous weapon.

Technologies are designed and created by people. We have a responsibility to establish boundaries between what is acceptable and what is unacceptable. We have the capacity to do this, to ensure that artificial intelligence and other emerging technologies contribute to the protection of humanity – not its destruction.

At the height of the Cold War, a Soviet officer saved the world. On 26 September 1983, Lieutenant Colonel Stanislav Petrov decided not to accept as accurate the computer signals alerting him to an imminent attack by US nuclear missiles. What would have happened if he had hit the ‘I-believe button’ and rubberstamped the system’s recommendation? Ensuring that weapons systems operate under meaningful human control means that life and death decisions are not delegated to machines. Human decision-making in a military context is as important now as it was during the Cold War.

What can you do?

John MacBride, LCol (Retd) has written an open letter to gather support from veterans and serving members of the military for a ban on the development, deployment and use of autonomous weapons.

If you feel concerned about the impending third revolution in warfare and what this will mean for the chain of command, order, accountability, and safety for members of the military and civilians around the world – please add your voice to our call. Your support is invaluable and with your help we can achieve a legal response to the problems posed by autonomous weapons.


Fully autonomous weapons are weapon systems that can identify and fire on targets without a human controlling them. They are not armed drones under human control, but machines that would decide whether or not to kill without human intervention. That decision would not be the result of the skills, knowledge, intelligence, training, experience, humanity, morality, situational awareness, and understanding of the laws of war and international humanitarian law that men and women in uniform use to make such decisions in battle. A machine would determine if the target was a combatant based purely on programming, likely developed in a sterile laboratory years before the decision to kill was made. Abrogating life and death decisions to a machine is morally, ethically and legally flawed.

No country has fielded fully autonomous weapons yet, but they are under development in a number of countries. Now is the time to put a stop to their development and ultimate deployment. Some argue that these weapons are necessary and inevitable. Among the arguments is that they would improve the survivability of servicemen and servicewomen. That might be the case if the enemy did not have similar weapons, but if one side has them, so will the other. We are told that machines do not have human frailties: they do not get tired, they do not get angry, they are not affected by weather or darkness to the extent that people are, and they know no fear – and that this makes them superior to a soldier. Machines do not have these frailties, but neither are they responsible or accountable for their decisions – they could and would attack with impunity. We believe weapons with these characteristics should be banned in accordance with existing International Humanitarian Law.

Technological advances in robotics are already assisting soldiers in areas such as the detection of explosive devices, search and rescue, and some engineering tasks. However, many in uniform, both serving and retired, are seriously concerned about the prospect of assigning decisions on whether, what and when to kill to machines. Autonomous weapons are not accountable for their actions. There is a great deal of doubt, particularly when considering asymmetric warfare, that machines are capable of reliably discriminating between targets that may legally be engaged and those that may not. As Soldiers, Sailors, Airmen and Airwomen, both serving and retired, we join the call for a ban on the development, deployment and use of weapon systems in which the decision to apply violent force is made autonomously.


Will you sign?

Your support is invaluable and with your help the development, production, and use of fully autonomous weapons can be prevented.


Killer Robots Will Fight Our Wars: Can They Be Trusted?

Paul Scharre is the Director of the Technology and National Security Program at the Center for a New American Security. He suggests that answering questions, such as whether or not a robot could make morally sound decisions, will help us find a humane way to move forward with the advancement of autonomous weapons. The issue with completely taking humans out of the loop, and giving killer robots free rein, boils down to the importance of our humanity.

The Dawn of Killer Robots

In INHUMAN KIND, Motherboard gains exclusive access to a small fleet of US Army bomb disposal robots—the same platforms the military has weaponized—and to a pair of DARPA’s six-foot-tall bipedal humanoid robots. We also meet Nobel Peace Prize winner Jody Williams, renowned physicist Max Tegmark, and others who grapple with the specter of artificial intelligence, killer robots, and a technological precedent forged in the atomic age. It’s a story about the evolving relationship between humans and robots, and what AI in machines bodes for the future of war and the human race.

The kill decision shouldn't belong to a robot

As a novelist, Daniel Suarez spins dystopian tales of the future. But on the TEDGlobal stage, he talks us through a real-life scenario we all need to know more about: the rise of autonomous robotic weapons of war. Advanced drones, automated weapons and AI-powered intelligence-gathering tools, he suggests, could take the decision to make war out of the hands of humans.


Podcast: Control of systems in time & space

This episode explores how time and space should be considered as means of maintaining meaningful human control, answering two key questions: 1) What systems are considered unacceptable and would need to be prohibited? 2) How is meaningful human control ensured over systems that do not need to be prohibited?


Join us

Keep up with the latest developments in the movement to Stop Killer Robots.
