
Frequently asked questions

Answers to your questions about autonomous weapons and Stop Killer Robots' work.

Increasingly we are seeing weapons systems with autonomous functions, and AI decision support and target recommendation systems, being developed and used in ongoing conflicts, including in Gaza and Ukraine. These conflicts are being used as testbeds for technologies with increasing levels of autonomy. While there are not yet verified reports of the use of weapons systems which use sensor processing to choose a target and to decide where and when an attack will happen without human approval, precursor systems are being deployed by numerous states.

This is extremely concerning in the absence of clear rules and limits that specifically apply to these weapons and other systems, especially as developers and users push the limits of what is acceptable under legal and ethical norms.

Stop Killer Robots is calling for new international law (i.e. a treaty) which would involve a combination of regulations and prohibitions on autonomous weapons systems. While trends show an increase in the use of autonomy in warfare, it is important to make distinctions between what is and what is not an autonomous weapon. This allows us to lobby effectively for new international law while highlighting the overlapping concerns between autonomous weapons, weapons systems with autonomous functions, and other military systems.

In the sections below we explain the difference between these technologies, outline our policy position and provide additional background on our campaign.


Killer robots

Autonomous weapon systems are weapons that detect and apply force to a target based on sensor inputs, rather than direct human inputs. After the human user activates the weapons system, there is a period of time during which it can apply force to a target without direct human approval. This means that what or who will be attacked, and where and when that attack happens, are determined by sensor processing, not by humans. Sensor-based targeting is a function of a weapons system: a weapons system is functioning autonomously when sensor processing can automatically trigger an application of force. Watch our ‘Autonomous weapons explained’ resource to better understand how autonomous weapons systems work.

Killer robots are those autonomous weapons systems that cannot be used in line with legal and ethical norms and should be prohibited. These are systems which cannot be used with meaningful human control, and systems which target humans. 

Killer robots raise significant moral, ethical and legal problems, challenging human control over the use of force and handing over life and death decision making to machines. They also raise concerns around digital dehumanisation, the process by which people are reduced to data points that are then used to make decisions and/or take actions that negatively affect their lives. 

Systems that function in this way threaten international peace, security and stability. Autonomy in weapons systems diminishes the control of the human operator and undermines accountability and responsibility in conflict. These weapons systems also raise serious concerns over compliance with international human rights law and the international humanitarian law principles of distinction, proportionality, precaution, and the prohibition of indiscriminate attacks.

If used, autonomous weapons would fundamentally shift the nature of how wars are fought. They would lead to more asymmetric war, and destabilise international peace and security by sparking a new arms race. They would also shift the burden of conflict further onto civilians. But the risks of killer robots don’t only threaten people in conflict. The use of these weapons within our societies more broadly could also have serious consequences.

Think about future protests, border control, policing, and surveillance, or even about other types of technologies we use. What would it say about our society – and what impact would it have on the fight for ethical tech – if we let ultimate life and death decisions be made by machines? The emergence and consequences of autonomous weapons affect us all.

Some people say killer robots would be more accurate – that they would be quicker and more efficient than human soldiers, could go into places that are difficult for soldiers to operate in, could be more precise in targeting, save lives by reducing “boots on the ground”, and act as a deterrent. But similar things were said about landmines, cluster munitions, and nuclear weapons – indiscriminate weapons that killed and injured hundreds of thousands of people before being banned.

Technologies that change their own behaviour or adapt their own programming independently can’t be used with real control. Other technologies can present a ‘black box’, where it is not possible to know why or how decisions are made. Under the law, military commanders must be able to judge the necessity and proportionality of an attack and to distinguish between civilians and legitimate military targets.

This means not just understanding a weapon system, but also understanding the context in which it might be used. Machines don’t understand context or consequences: understanding is a human capability – and without that understanding, we lose moral responsibility and we undermine existing legal rules. The threats and risks of killer robots far outweigh any potential advantages.

Many systems incorporate different automated functionalities (such as automatic target recognition) but do not apply force to (attack) targets based on sensor inputs. Most weapons systems that use automatic target recognition, for example, currently have a human operator who needs to approve any attack (we call these approval-based systems). However, it is difficult to verify what level of autonomy a system has over which function. For more, see this report by PAX on the legal and moral implications of trends in autonomy in weapons systems.

Such systems also raise moral, legal and ethical concerns, particularly around digital dehumanisation, and around whether the user is able to make a well-founded judgement as to whether the effects of an attack are in line with legal and ethical norms. Our research and monitoring team, Automated Decision Research, has created a Weapons Monitor resource with examples of weapons systems whose features are relevant to concerns around increasing autonomy in weapons.

Uncertainties around the capabilities of weapons systems and their autonomous functions (or lack thereof), and a trend towards increasing autonomy in warfare including in current conflicts, further highlight why new international law on autonomy in weapons systems is urgently needed.

AI target recommendation or target-generation systems are not weapons systems. They are data processing systems which process and analyse large amounts of data at high speed to generate targets and make targeting recommendations. The data used can include satellite imagery, telecommunications, drone footage, etc. Examples of such systems include Israel’s Habsora (aka ‘Gospel’) system, Israel’s Lavender system, and the United States’ Project Maven.

These systems are target recommendation or ‘decision support’ systems; they are not weapons. As such, they do not fall under the purview of a treaty on autonomous weapons systems. 

While they are not weapons systems, these systems still raise significant humanitarian, ethical, moral and legal issues. There are serious risks that humans will over-trust the recommendations of the system (automation bias) and may become over-reliant on automated systems without understanding how the recommendations were generated. The desire for increased speed can also limit the users’ cognitive engagement with decision-making. These systems can also create an accountability ‘smoke screen’ regarding who would be responsible if the recommendations lead to violations of international humanitarian law (IHL).

For more on the problems raised by such target recommendation systems, read Lucy Suchman’s piece on ‘the algorithmically accelerated killing machine’, and Elke Schwarz and Neil Renic’s piece, ‘Inhuman-on-the-loop: AI-targeting and erosion of moral restraint’.

‘Fire control’ systems, which involve a number of components working together to assist a ranged weapon system to target, track and strike, such as SmartShooter’s SMASH systems and AimLock’s systems, can be integrated with assault rifles and other weapons. These systems may use algorithms (specifically image recognition) to detect, classify or track targets, ‘locking on’ to a target chosen by a human operator and firing at the command of a human operator. Because a human makes the decision to apply force to (attack) a target, these are not autonomous weapons. Nevertheless, such systems also raise moral, ethical and legal issues, and concerns around digital dehumanisation.

There are many types of robot which can be used for other military purposes, such as robots for bomb disposal, for transport of military equipment, for mine clearance, for search and rescue, for surveillance purposes, and so on.

Some robots, such as remote-controlled robot ‘dogs’, have received significant attention in recent years, particularly due to worries around their potential weaponization and use by militaries and by police forces. As with all weapons systems, these robots should only be used in line with legal and ethical norms.

The increasing use of such robots for military purposes and uncertainties as to their capabilities, the extent of their autonomous functions, and their potential weaponization further highlight why new international law on autonomy in weapons systems is urgently needed.

Similarly, the use of such robots in non-conflict contexts, such as policing and border control, raises questions for the protection of human rights and the application of international human rights law.

While six robotics companies have previously pledged not to allow the weaponization of their robots, the continued development of robotic systems for military use demonstrates that self-regulation will not be sufficient to address the challenges that emerging technologies pose to the protection of civilians and international humanitarian law.

Our campaign

Stop Killer Robots is a global coalition of more than 250 international, regional, and national non-governmental organisations and academic partners working across 70+ countries to ensure meaningful human control over the use of force through the development of new international law. We also have a Youth Network which brings together young leaders from around the world to collaborate and support our efforts to secure a future without automated killing.

The Stop Killer Robots campaign calls for new international law on autonomous weapons systems. This new treaty should prohibit autonomous weapons systems which directly target humans and autonomous weapons systems which cannot be used with meaningful human control (these are what we call killer robots). The treaty should regulate all other autonomous weapons systems through the creation of positive obligations (things that states have to do to ensure compliance, or measures they have to take to prevent violations of international law), including specific rules on predictability, understandability, and temporal and spatial limitations. The International Committee of the Red Cross also recommends that systems designed or used to target humans and unpredictable autonomous weapon systems should be expressly prohibited, and that all other autonomous weapons systems should be subject to positive obligations.

Stop Killer Robots, like the International Committee of the Red Cross, is calling for new legally binding international rules to both prohibit and regulate autonomous weapons systems, rather than a full ‘ban’. New international law in the form of a mixture of prohibitions and regulations would be broader in scope than a ban, capture more issues of concern, and create the strongest possible international treaty. Besides prohibiting fundamentally unacceptable autonomous weapons, a treaty should also include rules to ensure that other weapons with autonomy are used in line with legal and ethical norms.

New international rules in the form of a legally binding instrument would create a durable and effective framework for the prohibition and regulation of the development and use of autonomous weapons systems. An instrument with a broad scope, a logical structure and with clear normative lines – like the prohibition on targeting people – will set a compelling standard even for states that do not join it at first. An instrument structured along these lines will shape the development of technologies for the future.

Around the world, momentum continues to build behind the call for limits on autonomy in weapons systems through a new international treaty. Killer robots are regarded as a major threat to humanity that requires a swift and strong multilateral response. Most recently, in November 2023, the first-ever UN resolution on killer robots was adopted at the United Nations General Assembly. This followed a series of regional positions (i.e. declarations and communiqués) adopted on killer robots earlier in 2023. Currently, a majority of states support the negotiation of new rules for autonomous weapons systems, as do the UN Secretary-General and the President of the International Committee of the Red Cross, who made a landmark joint appeal in October 2023 calling on states to negotiate a new legally binding instrument by 2026.



Join us

Keep up with the latest developments in the movement to Stop Killer Robots.
