Hero image: a fleet of military airplanes on the left; on the right, a person's hands controlling a drone seen in the distance. The images are separated by a perforated line.

What are the AI Act and the Council of Europe Convention?

Potential loopholes in new European legislation mean intrusive AI technologies could be used whenever “national security” grounds are invoked

*This blog was written by Francesca Fanucci, Senior Legal Advisor at the European Center for Not-for-Profit Law (ECNL), and Catherine Connolly, Stop Killer Robots’ Automated Decision Research Manager.

The three legislative European Union institutions – i.e., the European Commission, the European Parliament and the Council of the EU – are currently negotiating a proposal to regulate artificial intelligence (AI) in the 27 member states of the European Union. More specifically, the EU Artificial Intelligence Act (AI Act, or AIA) will establish common rules and obligations for providers and users (that is, deployers) of AI-based systems in the EU internal market. The AI Act is expected to be finalised before the end of 2023.


At the same time, the Council of Europe (CoE) – a different inter-governmental organisation, made up of 46 member states including the 27 EU member states – is negotiating an international treaty – a so-called “Framework Convention” – on the development, design and application of AI systems, based on the Council of Europe’s standards on human rights, democracy and the rule of law. The Framework Convention will also be open to accession by non-European states, turning it into a global standard-setting instrument on AI.


The two legal instruments – the EU AI Act and the CoE Framework Convention – may overlap on some issues but will essentially complement each other: the former will establish common rules to harmonise the EU internal market for AI systems, whereas the latter will focus on AI systems’ compliance with internationally established standards on human rights, democracy and the rule of law. However, negotiators in both the EU and the Council of Europe are considering excluding AI systems designed, developed and used for military purposes, matters of national defence and national security from the scope of their final regulatory frameworks.


As civil society and human rights defenders, we are seriously concerned about these proposed exemptions and their cumulative effect if approved by both the EU and the Council of Europe. If the EU does not harmonise rules for AI systems used for military, national defence and national security purposes across its 27 member states, and the Council of Europe does not establish binding human rights standards applicable to such systems in its 46 member states and other Parties to the Framework Convention, we will be left with a huge regulatory gap affecting the design, deployment and use of these systems. To be absolutely clear: we are not claiming that no rules whatsoever would apply to them, as individual states would retain the power to adopt regulation within their own jurisdictions. However, potentially divergent rules at national level would not be effective, since national defence policies are inevitably interconnected, have an impact beyond national territory, and must also take into account the actions of big international players such as the US, China and the Russian Federation. What we need is a set of minimum common regulatory safeguards, including mandatory human rights impact assessments, for AI-based systems that may be developed and used in these contexts.


When we talk about the design, development and use of AI systems in the military or national defence sector, it might be instinctive to immediately think of autonomous weapons, or other AI-enabled weapons systems. But the military and national defence sectors can use – and in some cases are already using in practice – other types of AI, for example:

  • Threat recognition devices using mobile cooperative and autonomous sensors, such as air and ground vehicles that detect threats – for example, enemy ships and their predicted behaviour – via artificial intelligence (such systems have already been developed by the US Army);
  • Devices mapping battlefields in real time, again via mobile cooperative and autonomous sensors, in order to identify targets for attack and exclude civilian areas;
  • Facial recognition tools deployed at borders to detect enemy infiltration (see, e.g., the Ukrainian Defence Ministry’s use of Clearview AI to identify Russian individuals);
  • Job recruitment tools used by national defence agencies, based on AI systems that identify suitable candidates by sorting CVs, scouring existing databases of past visits/inquiries and identifying trends (e.g., successful candidates most likely coming from a certain ethnic group or educational background);
  • AI-based training tools that offer preparation content and measure the progress of military personnel (already used, e.g., in US Air Force pilot programmes);
  • AI systems integrated into autonomous vehicles to assist military personnel with transport.


If we think about it, most of these non-weapon-specific systems, even when specifically researched or designed for use in the military sector, are “dual-use” in nature, because they can be adapted and re-purposed for civilian use as well: AI-based threat recognition is already used to train autonomous vehicles to recognise dangers in the street; emotion recognition is already used to test customer reactions to advertised products; AI-based recruitment tools can be used by private companies or even universities to help them select the best candidates; and AI-based territory mapping is used in GPS navigation systems to determine optimal routes (e.g., the shortest or least busy route). Furthermore, their impact on both civilians and military targets can be unnecessary and disproportionate even when it does not result in the loss of life, because it can still compromise people’s human rights, such as the rights to privacy and non-discrimination (see, e.g., the infamous case of Amazon’s recruitment tool that ended up favouring male candidates only).


The NSO Group’s infamous Pegasus spyware is also a perfect example of technology nominally “developed or used exclusively for national security and defence purposes”. In practice, however, even when this technology was allegedly used for national security or law enforcement purposes (and indeed, the company boasted about having helped authorities uncover drug trafficking rings and other criminal activities), it was abused, resulting in human rights violations. Notable examples include the indiscriminate mass surveillance of political dissidents, lawyers, human rights defenders, journalists, academics, senior government officials and civil society representatives, whose mobile phones were hacked and whose data was leaked by governments that had purchased the technology and re-purposed it for their spying activities.


So why should the alleged destination of AI systems for the military, national security or national defence sector exempt them altogether from the impact assessments and minimum transparency and accountability obligations that we expect when the same systems are applied in the civilian sector?


As the European Center for Not-for-Profit Law (ECNL) has previously pointed out with regard to the AIA, ‘Without proper regulatory safeguards, intrusive AI-based technologies – including with mass surveillance outcomes – could be used in the public sector with no special limitations or safeguards whenever “national security” grounds are invoked by a Member State. Even those AI systems presenting “unacceptable” levels of risks and therefore prohibited by the AIA could be easily “resuscitated” or “recycled” for the exclusive purpose of national security, affecting our freedoms of movement, assembly, expression, participation and privacy, among others.’


That exemptions are being sought for AI systems designed, developed and used for military purposes, matters of national defence and national security at both European Union and Council of Europe level further highlights the wider international regulatory gaps when it comes to AI and its use for military, national security and national defence purposes. As noted above, AI systems will be used for many different military purposes, including in weapons systems with autonomous capabilities.


Regarding the regulation of autonomous weapons, the International Committee of the Red Cross has urged states to negotiate new, legally binding rules on autonomy in weapons systems, containing both prohibitions and regulations. For nearly ten years, UN-level discussions on autonomous weapons systems have been ongoing under the Convention on Certain Conventional Weapons (CCW). The majority of states involved in these discussions support the negotiation of new international rules on autonomous weapons systems; however, a small minority of heavily militarised states continue to block progress. At EU level, the European Parliament passed resolutions in both 2018 and 2021 calling for the launch of negotiations for a legally binding instrument on autonomous weapons. The 2022 report of the European Parliament’s Special Committee on Artificial Intelligence in the Digital Age (AIDA) also called for the launch of negotiations, arguing that ‘machines cannot make human-like decisions involving the legal principles of distinction, proportionality and precaution’. Meanwhile, technological advancements in this area continue apace, with numerous systems now incorporating autonomous functions such as target recognition and threat classification capabilities. The urgent need for new international rules on autonomy in weapons systems could not be clearer.


However, discussions on autonomy in weapons systems at the UN and within EU bodies have to date focussed primarily on the use of such systems in conflict, and in the context of international humanitarian law. It is imperative that states, international bodies and organisations also take human rights into account when considering AI in weapons systems, as well as when considering all other AI systems used for military, national security and national defence purposes.


The boundaries between technologies of war and technologies of state and police power have always been porous. Digital dehumanisation and the well-documented harms of AI systems, which particularly affect marginalised groups and the most vulnerable in our societies, will not be confined to the civilian sector. The EU and the Council of Europe must ensure that: a) there are no blanket exemptions for AI systems designed, developed and used for military purposes, matters of national defence and national security; and b) such systems undergo risk and impact assessments before they are deployed and throughout their use. Failure to include such systems in the scope of the AIA and the CoE Framework Convention would represent a grievous abdication of responsibility on the part of these institutions. ECNL calls on all EU and CoE citizens and residents to contact their government and parliamentary representatives and urge them to ensure that there are no blanket exemptions, and that risk assessments are required, in the negotiations of the EU AI Act and of the CoE Framework Convention on AI. Stop Killer Robots is calling for the urgent negotiation of new international law on autonomy in weapons systems – to show your support, please sign our petition here.


Francesca Fanucci and Catherine Connolly
