Facial Recognition-Related Provisions of the EU’s Draft AI Regulation, part 1

By Theodore Christakis and Mathias Becuywe

Posted on May 12, 2021



This post is the first section of “Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation.” It is republished on TAP by permission of its authors, Théodore Christakis and Mathias Becuywe, and the European Law Blog.

 

The draft Artificial Intelligence Regulation proposed by the European Commission on 21 April 2021 was eagerly anticipated. Its provisions on facial recognition were anticipated to an even greater degree, given the heated debate going on in the background between those who support a general ban of this technology in public spaces and those who consider that it has “a lot to offer as a tool for enhancing public security”, provided that rigorous red lines, safeguards and standards are introduced. NGOs (such as those who support the “Reclaim Your Face” campaign) and political groups (such as the Greens) have been calling for a total ban of “biometric mass surveillance systems in public spaces”. Contrary to these calls, in their submissions to the public consultation on the White Paper, some countries (e.g. France, Finland, the Czech Republic and Denmark) claimed that the use of facial recognition in public spaces is justified for important public security reasons, provided that strict legal conditions and safeguards are met (see the Impact Assessment Study, at 18). The results of the public consultation on the White Paper on AI are mixed on the issue of a ban (see here, at 11), but an overwhelming majority of respondents are clearly calling for new rules in this field.

 

Whilst the idea of a complete ban has been rejected (as we will discuss later in this paper), leading to reactions from the European Data Protection Supervisor (EDPS) and NGOs, the Commission’s draft Regulation attempts to deliver on the idea of introducing new rules for what it calls “remote biometric identification” (RBI)[i] systems, which include not only facial recognition but also other systems that process biometric data for identification purposes, such as gait or voice recognition.

 

The objective of this paper is to present the basic features of this proposed set of rules; to decipher the “novelties” among these when compared with existing rules related to the processing of biometric data, especially Article 9 of the General Data Protection Regulation (GDPR) and Article 10 of the Law Enforcement Directive (LED); and to explain the logic behind the new mechanisms and constraints that have been introduced. Part 1 of this paper includes a table that we have produced in order to enable an understanding of the facial-recognition-related provisions of the draft AI Regulation “at a glance”. Part 2 focuses on the rules proposed in the draft to regulate the use of RBI in publicly accessible spaces for the purpose of law enforcement.

 

The analysis below is based on certain highlights of a first high-level discussion on this topic organised on April 26, 2021 by the Chair on the Legal and Regulatory Implications of Artificial Intelligence (MIAI@Grenoble Alpes), with the cooperation of Microsoft. The workshop, which was held under the Chatham House Rule, included representatives of three different directorates-general of the European Commission (DG-Connect, DG-Just and DG-Home), the UK Surveillance Camera Commissioner, members of the EU Agency for Fundamental Rights (FRA) and Data Protection Authorities (CNIL), members of Europol and police departments in Europe, members of the European and the French Parliaments, representatives of civil society and business organisations, and several academics. A detailed report of this workshop and a list of attendees will be published in the coming days on AI-Regulation.Com, where we have also posted the materials distributed during the workshop, which readers of this blog may find very useful.

 

I. Table: RBI Rules, At A Glance

 

In order to present the Commission’s proposal in a structured and more accessible way, we have produced the following table giving a visual overview of the basic RBI rules and mechanisms in the draft AI Regulation.

 
Table displaying overview of the basic remote biometric identification rules and mechanisms in the draft AI Regulation.

Visual Overview of the Basic RBI Rules and Mechanisms in the draft AI Regulation. Note: RBI refers to “remote biometric identification.”

 

 

The table is divided into two parts (indicated by the blue dotted line) to represent the distinction made by the draft Regulation between obligations for “providers” of RBI systems (i.e., any natural or legal person who develops an RBI system in order to place it on the market and make it available for use); and “users” (i.e., any person or authority that deploys or uses an RBI system which is already available on the market).

 

1) The Upper Section: Important Pre-Market Requirements for RBI Developers and Providers

 

When one focuses on the upper section, it is immediately apparent that the draft Regulation proposes some remarkable novelties in relation to the obligations and pre-market requirements for providers that develop RBI systems.

 

Firstly, these new obligations concern all RBI systems, not only the “real-time” RBI systems[ii] whose regulation, as used by law enforcement authorities (LEAs), is shown in the lower section of the table. This is very important because it means that these pre-market obligations will also cover “post” RBI systems[iii], used for instance by LEAs to aid in the identification of a person who has committed a crime, using photos or video stills. Such identification/forensic methods are already used, for instance, in France in accordance with Article R 40-26 (3) of the French Code of Criminal Procedure. In 2019, a person identified by such a system after committing a burglary in Lyon unsuccessfully tried to challenge the use and reliability of post-RBI systems (the Court followed the prosecutor, who explained that the facial recognition system was just one of several tools used by LEAs during the investigation). The Commission suggests that henceforth the development of post-RBI systems should be subject to exactly the same kind of strong pre-market requirements as those that concern “real-time” RBI.

 

Secondly, these RBI systems, in common with all other “high-risk AI systems” (that the Commission lists in Article 6 and Annex III of the Regulation), will be subject to a series of strict requirements and obligations (Articles 8-15) before they can be put on the market. These include:

 
  • Adequate risk assessment and mitigation systems;
     
  • High quality of the datasets feeding the system, to minimise risks and discriminatory outcomes;
     
  • Logging of activity to ensure traceability of results;
     
  • Detailed documentation which provides all the necessary information about the system and its purpose so that authorities can assess whether it complies with requirements;
     
  • Information that can clearly and adequately be read by the user;
     
  • Appropriate human oversight measures to minimise risk;
     
  • High level of robustness, security and accuracy.
     

Thirdly, RBI systems will be subject to stricter conformity assessment procedures than those of all other high-risk AI systems in order to ensure that they meet these requirements. Whereas with other high-risk AI systems, the conformity assessment could be conducted by the system provider based on an ex ante assessment and by means of internal checks, RBI will have to undergo an ex ante third-party conformity assessment, because of the particularly high risks that fundamental rights might be breached. The only exception to this would be if RBI providers fully comply with the harmonised standards that are to be adopted by the EU standardisation organisations in this field. If this were the case, RBI systems providers could replace the third-party conformity assessment with an ex ante internal conformity assessment (Article 43(1)). In addition to ex ante conformity assessments, there would also be an ex post system for market surveillance and supervision of RBI systems by competent national authorities designated by the Member States.

 

During the April 26 workshop, several very interesting issues were discussed by the participants in relation to the obligations of providers under the draft Regulation, the requirements set for RBI systems and the way the conformity assessment should be conducted. Due to space restrictions we cannot elaborate on these issues here, but they will be discussed in extenso in the Workshop’s Report to be published shortly.

 

2) The Lower Section: Constraints for LEAs Users of “Real-Time” RBI in Public Spaces

 

The lower section of the table focuses on the RBI-related provisions in the draft Regulation which concern the use of such RBI systems. Once an RBI system has obtained certification, it can be put on the market and be used by public or private actors in accordance with existing, binding EU Law, in particular the GDPR and the LED. However, the draft Regulation intends to introduce new rules and constraints concerning one specific way in which RBI systems are used, namely employing “real-time” RBI in publicly accessible spaces for the purpose of law enforcement (terms defined in Article 3 and also reproduced in our materials accompanying this blog). The draft Regulation announces that using RBI in such a way is to be prohibited, unless it meets the criteria for three exceptions which appear in pink/coral in our table (and in Article 5(1)(d)). One of these exceptions allows for the use of RBI for the “detection, localisation, identification or prosecution of a perpetrator or suspect” who has committed one of the 32 categories of criminal offences listed in the Framework Decision on the European Arrest Warrant (in our table, in grey, on the right), on the condition that such offences are punishable in the Member State concerned by a custodial sentence of at least three years.

 

When one compares these proposals with Article 10 of the LED, which already prohibits the processing of biometric data by LEAs except where “strictly necessary”, subject to “appropriate safeguards” and “where authorized by Union or Member State Law”, one may wonder whether they add anything new to the existing legal framework. The answer is clearly yes, and this for two main reasons.

 

Firstly, the draft Regulation intends to entirely prohibit certain ways in which RBI is used in publicly accessible spaces for the purpose of law enforcement, such as when the police use facial recognition to identify persons participating in a public protest or persons who have committed offences other than the 32 that appear in our table.

 

Secondly, and most importantly, the draft AI Regulation aims to introduce an authorisation procedure that does not yet exist in law. Article 5(3) provides that such real-time uses of RBI by LEAs in publicly accessible spaces shall require prior authorisation “granted by a judicial authority or by an independent administrative authority of the Member State” (most probably the relevant Data Protection Authority). LEAs that intend to use the Article 5(1)(d) exceptions will thus need to submit a “reasoned request” based on a Data Protection Impact Assessment (DPIA) which determines whether all the conditions and constraints of the new instrument and the existing data protection legislation, as well as national law, are met.

 

Having presented our table and the basic structure of the RBI-related provisions of the draft AI Regulation, the second part of this article looks at some interesting issues concerning the use of RBI systems.

 

Note: The second part of this article is republished on TAP at “Facial Recognition-Related Provisions of the EU’s Draft AI Regulation, part 2. The article first appeared in its entirety on the European Law Blog as “Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation.”

 

References:

 

[i] According to Article 3 (36) of the draft Regulation, “remote biometric identification system” means an “AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified”.

 

[ii] According to Article 3 (37) of the draft Regulation, ‘‘real-time’ remote biometric identification system’ means “a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention”.

 

[iii] According to Article 3 (38) of the draft Regulation, ‘‘post’ remote biometric identification system’ means “a remote biometric identification system other than a ‘real-time’ remote biometric identification system”.

 


 

This paper, “Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation,” was first published at the European Law Blog (ELB) on May 4, 2021. It is reproduced here with the kind permission of the authors, Professor Théodore Christakis and Mathias Becuywe and the ELB Editors.

