
Statement to the Permanent Council of the Organization of American States

Thank you, Chair, and thank you to Costa Rica for their kind invitation to address the Permanent Council.

I speak on behalf of the Stop Killer Robots campaign, an international coalition of over 200 member organisations from more than 70 countries calling for the negotiation of a new, legally binding instrument on autonomous weapons systems. This call for negotiation has enormous support from a wide range of stakeholders, including the UN Secretary-General, the ICRC, thousands of experts in technology and AI, faith leaders, civil society, and the majority of public opinion.

As we are all no doubt aware, AI and automated decision-making technologies are permeating many areas of society. In the non-military sphere, the risks and challenges accompanying the use of artificial intelligence and algorithmic systems are widely recognised, as is the need for effective regulation. In September 2021, a report from the UN High Commissioner for Human Rights noted that AI technologies can ‘have negative, even catastrophic, effects if deployed without sufficient regard to their impact on human rights’. Numerous notable industry figures have called on governments to strengthen regulation and to create new legislation and regulatory bodies in order to address the serious issues raised by AI and machine-learning technologies. At the European Union level, the European Parliament will soon vote to pass the Artificial Intelligence Act, which is expected to become law by the end of this year and which will apply to all member states of the European Union. The AI Act will introduce a number of standards and safeguards for AI technologies in order to protect human rights, including transparency requirements and high-risk AI classifications, and will ban the use of biometric facial recognition systems in public spaces, as well as biometric categorisation systems, emotion recognition systems, and predictive policing systems.

Such measures are sorely needed, because AI and automated decision-making systems are already causing serious harm around the world. To give just two examples: in the Netherlands, an algorithm erroneously flagged families for child welfare fraud, targeting migrant families in particular and leading to severe financial hardship for thousands of people; and in the United States and other jurisdictions, people – in particular people of colour – have been arrested after being wrongly identified by facial and biometric recognition systems. Automated and algorithmic systems engender particular concerns regarding bias, discrimination, inequality and human dignity. Those who will bear the brunt of the impacts of these technologies are the populations which have historically ‘been the most affected and harmed by the emergence of new technologies and weapons systems’.

The extent to which the serious risks and challenges of AI and machine learning are recognised by states, international bodies and organisations in the civilian space should be taken as validation of related concerns around the use of AI in the military space. Autonomous weapons systems are not hypothetical. We are seeing a significant trend towards increasing autonomy in various functions of weapons systems, including in critical functions such as target selection and the application of force. These systems come in many different shapes and sizes: some are drones or loitering munitions; some are unmanned ground vehicles, such as tanks. Three systems of concern already in use are the STM Kargu-2, the Kalashnikov Group’s KUB-BLA, and Elbit Systems’ LANIUS. The Kargu-2 is a loitering munition with autonomous flight capabilities and an automatic target recognition system; in 2021, a UN Panel of Experts reported that the Kargu-2 had been used in Libya and had been ‘programmed to attack targets without requiring data connectivity between the operator and the munition’. The KUB-BLA has reportedly been used by Russia in Ukraine, and is said to have ‘artificial intelligence visual identification (AIVI) technology for real-time recognition and classification of targets’. Elbit Systems claims that its LANIUS system can carry out ‘enemy detection’ and ‘threat classification’, differentiating between combatants and non-combatants, or hostile versus friendly individuals.

Claiming the ability to distinguish between combatants and civilians, between active combatants and those hors de combat, or between civilians, civilian persons with disabilities, and civilians directly participating in hostilities, on the basis of data acquired by sensors and processed and classified by algorithms, raises serious legal, ethical and moral concerns. Autonomous weapons do not ‘see’ us as human beings. Instead, we are ‘sensed’ by the machine as a collection of data points. If this information matches or fits the target profile, the weapons system will apply force. No human is involved in making this life-or-death decision. The dehumanisation that results from reducing people to data points based on specific characteristics raises serious questions about how algorithmic target profiles are created, what pre-existing data those profiles are based on, and the data on which the systems were trained. It also raises questions about how the user can understand what falls within a weapon’s target profile, and why the weapons system decided to use force. Giving machines the power to make life-or-death decisions undermines human dignity and deprives us of our rights. Instead of being seen as people, we are processed as objects.

As I have already noted, there is widespread recognition that AI and automated decision-making systems present numerous ethical, legal and moral risks and challenges, and that specific regulatory responses to these risks and challenges are required. In the military space, the risks and harms arising from such systems, in particular autonomous weapons systems, are severe, involving possible death and injury to individuals and the deprivation of fundamental human rights. States must use their power to create new, legally binding rules containing a mixture of prohibitions and regulations for autonomous weapons systems. States should prohibit autonomous weapons systems which cannot be used with meaningful human control, prohibit systems which use sensors to target humans directly, and create positive obligations to ensure meaningful human control over all other systems. This technology is not waiting for us somewhere in the future – it is here now, and it is time for states to act.

Thank you.

Dr. Catherine Connolly
