
Why tech workers should oppose Killer Robots

Laura Nolan is a computer programmer who resigned from Google over Project Maven. She is now a member of the International Committee for Robot Arms Control (ICRAC), a founding member of the Campaign to Stop Killer Robots. #TechAgainstKillerRobots.

Laura Nolan, technologist and former Google software engineer – Photo by Clare Conboy

There are many ethical, political and legal reasons to oppose autonomous weapons, which can strike without a direct human decision to attack. These are good reasons to worry, but I am a software engineer, and I also oppose the development and use of autonomous weapons on technological grounds.

All software has bugs. Testing cannot find and eliminate them all. This is a consequence of the fact that software has state, which changes over time. Every additional variable that a program uses multiplies the number of states it can be in (to say nothing of state in the operating system). We kludge this by testing systems from a newly-started, predictable state, but this does not mean that we understand all the ways that a program can behave.
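
To make the state explosion concrete, here is a toy sketch (my own illustration, not code from any real system): if a program's state were just a set of independent boolean flags, each additional flag would double the number of reachable states, quickly outrunning any test suite.

```python
# Toy illustration of state explosion, not from any real system.
from itertools import product

def enumerate_states(num_flags: int):
    """Yield every possible combination of num_flags boolean variables."""
    yield from product((False, True), repeat=num_flags)

# Small state spaces can be tested exhaustively...
print(sum(1 for _ in enumerate_states(10)))  # 1024 states

# ...but each extra variable doubles the count, and real programs
# carry far more state than a test suite can ever visit.
for n in (20, 40, 80):
    print(f"{n} boolean variables -> {2 ** n:,} possible states")
```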

Methodologies do exist for implementing safety-critical software systems. However, research on how to build safety-critical autonomous systems is in its infancy, even as the commercial focus on self-driving vehicles has grown over the past decade. The problem may well be unsolvable.

No computing system has ever been built that cannot be hacked. Even air-gapped systems, which are never connected to the Internet, have been hacked. Remotely operated drones have already been hacked. With autonomous weapons the problem is worse — because there is nobody directly controlling the weapon, attackers may be able to change its behaviour without anyone immediately being aware.

Autonomous weapons are not synonymous with AI (artificial intelligence), but it is highly likely that many autonomous weapons will incorporate AI techniques for target identification, particularly object recognition. Unfortunately, object recognition by computer vision can be fooled, and so can lidar-based perception.
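
To give a flavour of how such fooling works, below is a minimal sketch of the well-known fast gradient sign method, assuming PyTorch, a pretrained ImageNet classifier and a random stand-in image (none of this comes from any real weapons system). Each pixel is nudged slightly in the direction that increases the classifier's error, which can flip the predicted label while the image looks unchanged to a human.

```python
# Minimal fast gradient sign method sketch (assumes PyTorch/torchvision).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image: torch.Tensor, true_label: int,
                epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image`, perturbed to increase the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).detach()

image = torch.rand(1, 3, 224, 224)  # random stand-in for a real photo
adversarial = fgsm_attack(image, true_label=0)
print(model(image).argmax().item(), model(adversarial).argmax().item())
```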

AIs are unpredictable and tend to fare poorly when human beings try to fool them, or when they are used in environments other than those they were trained for. Warfare is an arena characterised by deception and constant change in tactics, so AI-based decision-making and battle are likely a poor match.
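
As a toy demonstration of that mismatch (my own sketch, assuming scikit-learn and synthetic data), the classifier below is near-perfect on the environment it was trained in, but its accuracy collapses as the test environment drifts away from the training one:

```python
# Toy distribution-shift demo with synthetic data (assumes scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n: int, shift: float = 0.0):
    """Two 1-D Gaussian classes; `shift` moves the whole environment."""
    x0 = rng.normal(0.0 + shift, 1.0, size=(n, 1))
    x1 = rng.normal(4.0 + shift, 1.0, size=(n, 1))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)
clf = LogisticRegression().fit(X_train, y_train)

for shift in (0.0, 2.0, 4.0):
    X_test, y_test = make_data(500, shift)
    print(f"shift={shift}: accuracy={clf.score(X_test, y_test):.2f}")
# Accuracy starts near-perfect and falls towards coin-flipping as the
# test environment drifts from the training environment.
```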

The Robot mascot for the Campaign to Stop Killer Robots

Automation bias is a well-known phenomenon in which human beings tend to believe and favour computer-generated suggestions. Military systems that do incorporate human decision-making may still suffer from this problem — operator overconfidence in the Patriot missile system has been cited as a factor in several friendly-fire incidents. Even automated systems that require a human to give a final OK to an attack can therefore be a problem due to automation bias.

The use of AI in decision-making systems only compounds this problem, because modern forms of AI cannot provide any human-understandable reasoning for their decisions. Another new field, explainable AI (or XAI), aims to solve this, but despite enormous interest it has yet to yield real progress. As with research into safety-critical autonomous systems, it is entirely possible that this area will never bear fruit.

We could build robots that can kill today. We cannot build a robot that is safe, that cannot be hacked, that works predictably in most or all situations, that is free of errors, and that can reliably manage the complexities of international law and the laws of war. That is why a treaty banning the development and use of autonomous weapons is urgently needed.

If this article resonates with you as a technologist, check out the Campaign to Stop Killer Robots resources for technology workers or join me in using #TechAgainstKillerRobots.


Original article posted on Medium.com.

Laura Nolan
