Facial Recognition-Related Provisions of the EU’s Draft AI Regulation, part 2

By Théodore Christakis and Mathias Becuywe

Posted on May 13, 2021



This post is the second section of “Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation.” It is republished on TAP by permission of its authors, Théodore Christakis and Mathias Becuywe, and the European Law Blog.

 

From the Introduction of “Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation”:

 

The draft Artificial Intelligence Regulation proposed by the European Commission on 21 April 2021 was eagerly anticipated. Its provisions on facial recognition were anticipated to an even greater degree, given the heated debate going on in the background between those who support a general ban of this technology in public spaces and those who consider that it has “a lot to offer as a tool for enhancing public security” provided that rigorous red lines, safeguards and standards are introduced. NGOs (such as those who support the “Reclaim Your Face” campaign) and political groups (such as the Greens) have been calling for a total ban of “biometric mass surveillance systems in public spaces”. Contrary to these calls, in their submissions to the public consultation on the White Paper, some countries (e.g. France, Finland, the Czech Republic and Denmark) claimed that the use of facial recognition in public spaces is justified for important public security reasons provided that strict legal conditions and safeguards are met (see the Impact Assessment Study, at 18). The results of the public consultation on the White Paper on AI are mixed on the issue of the ban (see here, at 11), but an overwhelming majority of respondents are clearly calling for new rules in this field.

 

The objective of this paper is to present the basic features of this proposed set of rules; to decipher the “novelties” among these when compared with existing rules related to the processing of biometric data, especially Article 9 of the General Data Protection Regulation (GDPR) and Article 10 of the Law Enforcement Directive (LED); and to explain the logic behind the new mechanisms and constraints that have been introduced. Part 1 of this paper [republished on TAP as “Facial Recognition-Related Provisions of the EU’s Draft AI Regulation, part 1”] includes a table that we have produced in order to enable an understanding of the facial-recognition-related provisions of the draft AI Regulation “at a glance”. Part 2 focuses on the rules proposed in the draft to regulate the use of RBI in publicly accessible spaces for the purpose of law enforcement.

 

The analysis below is based on certain highlights of a first high-level discussion on this topic organised on April 26, 2021 by the Chair on the Legal and Regulatory Implications of Artificial Intelligence (MIAI@Grenoble Alpes), with the cooperation of Microsoft. The workshop, which was held under the Chatham House Rule, included representatives of three different directorates-general of the European Commission (DG-Connect, DG-Just and DG-Home), the UK Surveillance Camera Commissioner, members of the EU Agency for Fundamental Rights (FRA) and Data Protection Authorities (CNIL), members of Europol and police departments in Europe, members of the European and the French Parliaments, representatives of civil society and business organisations, and several academics. A detailed report of this workshop and a list of attendees will be published in the coming days on AI-Regulation.Com, where we have also posted the materials distributed during this workshop, which may be very useful for the readers of this blog.

 

II. Use of RBI and Facial Recognition: “Nationalising” the “ban” debate – and other interesting issues

 

Here are some of the highlights of the issues discussed and clarifications given during the April 26 workshop mentioned above.

 

1) What Happens When Facial Recognition is Used in Other Ways? The Draft AI Regulation as Lex Specialis

 

The participants of the 26 April workshop agreed that the prohibition in Article 5(1)(d) of the draft AI Regulation does not cover a series of other ways in which RBI and facial recognition are used. In particular, the draft does not intend to prohibit:

 
  1. Real-time use of RBI in publicly accessible spaces by public authorities for purposes other than “law enforcement” (as defined in Article 3(41)). This means, for instance, that local governments are not prohibited under the draft Regulation from using such systems in order to control access to a venue for purposes other than law enforcement (for instance, in order to facilitate and accelerate access by people).
     
  2. Real-time use of RBI in publicly accessible spaces by private actors, such as private security companies (unless they are entrusted by the State to exercise public powers for law enforcement purposes). This means that retailers, transport companies or stadiums are not prohibited under the draft Regulation from using real-time RBI for any purpose, including scanning shoppers entering supermarkets to reduce shoplifting and abuse of staff, or barring entry to fans who have been banned from a stadium.
     
  3. “Post” RBI, including when it is used by LEAs for the purpose of helping identify, using a photo or video-still, a person who has committed a crime.
     
  4. Use of real-time RBI by all actors (including LEAs) in non-publicly accessible spaces, as defined in Article 3(39) and Recital 9.
     
  5. Any use of facial recognition technologies that does not amount to “RBI” within the meaning of Article 3(36) and Recital 8. This covers, for instance, the use of facial recognition for authentication purposes in security access protocols, where the system determines a one-to-one match in order to confirm that a person is who they claim to be (for example: comparing a passport photo to a passenger, or a badge to a person trying to enter a building).
     

The fact that all these uses of facial recognition are not prohibited by the draft AI Regulation, nor subject to “authorisation” under Article 5(3), does not mean, however, that the Regulation does not cover them, or that these uses are not otherwise regulated by EU law (infra).

 

On the one hand, it must be emphasised, once again, that the draft Regulation contains a significant number of novelties vis-à-vis all RBI systems, whether real-time or ex-post, and whether used by public authorities or private actors, in publicly accessible spaces or not. These novelties concern all the pre-market requirements and demanding conformity assessment procedures explained in Part 1 above. These should ensure, for instance, that the RBI systems placed on the market meet strict requirements for accuracy, risk management, robustness, cybersecurity, etc. They should also help develop systems free of bias based on ethnic, racial, gender and other human characteristics. This means that in all the scenarios mentioned in (1), (2), (3) and (4) above, only systems that have, in principle, been certified by third-party bodies as meeting these requirements could be used.

 

On the other hand, it must be stressed that all the uses of RBI and facial recognition mentioned above are already regulated by existing law, namely the GDPR and, when relevant, the LED. Article 9 of the GDPR prohibits, in principle, the processing of biometric data and provides for certain strict exceptions, subject to a series of conditions and safeguards, including the principles of necessity and proportionality. Data protection authorities (DPAs) and courts around Europe have already declared certain uses of facial recognition systems illegal because those uses cannot meet the GDPR requirements. A good example is the February 27, 2020 decision of a French court, which considered that the “experimental” use of facial recognition in two high schools in the South of France to grant or refuse access to students did not meet the “consent” requirements of Article 9(2)(a) of the GDPR, nor the “less intrusive means” requirement arising from the “strict” proportionality test under the GDPR.

 

The participants of the 26 April workshop stressed that the prohibition regarding LEAs that appears in Article 5(1)(d) of the draft AI Regulation is intended to apply as lex specialis with respect to the rules on the processing of biometric data contained in Article 10 of the LED. The Commission therefore decided to focus on the most problematic and dangerous uses of RBI in terms of human rights, namely real-time use by public authorities, in publicly accessible spaces, for law enforcement purposes. While Article 10 of the LED already poses serious obstacles with regard to the processing of biometric data by LEAs (stating that it “shall be allowed only where strictly necessary” and subject to conditions and appropriate safeguards), the Commission sought to go further down this road by regulating such use and the processing of biometric data involved in an exhaustive manner, while also imposing the fundamental condition of prior authorisation by a Court or a DPA when LEAs intend to use real-time RBI. But in any case, for all conceivable uses of facial recognition by public or private actors, existing data protection law still applies.

 

2) To Ban or Not to Ban? That is NOT a Question for the EU, But for Member States

 

An issue that was discussed extensively during the 26 April workshop was why the Commission did not accept the invitation of some stakeholders to ban the use of RBI by LEAs in publicly accessible spaces altogether. Several responses were given by participants. These responses focused on the fact that law enforcement authorities in several EU States consider that the use of RBI could be a useful tool for enhancing public security in some circumstances, if subject to appropriate red lines and safeguards. While EU Member States conferred competences on the EU in relation to data protection and respect for fundamental rights, they did not concede powers relating to the maintenance of national security and public order. National security (including the fight against terrorism) remains a competence of the Member States. Identity checks and police controls are also an exclusive competence of the Member States. The fight against crime and the preservation of public order are likewise mainly competences of the Member States, despite the emergence of EU criminal law, which boosts cooperation between LEAs and deals with certain practical problems at an EU level.

 

Against this background, it was probably difficult for the Commission to consider the use of RBI systems by LEAs exclusively through the data protection prism and to ignore that its proposals will directly affect how the police act on Member State territories, an issue that remains a prerogative of Member States.

 

It is our understanding that the Commission is therefore attempting, to a certain degree, to “nationalise” the debate over whether to ban the use of RBI by LEAs in publicly accessible spaces. Its draft proposals set strong pre-market requirements and conformity assessments for the development of RBI software. They also impose a general ban on the use of real-time RBI for law enforcement purposes, which can only be lifted if Member States adopt clear and precise rules of national law authorising such use under one or more of the three possible exceptions. If they do so, they will need to respect both the conditions that already exist under Article 10 of the LED and the additional requirements and safeguards introduced by Article 5 of the draft Regulation, including the need for express and specific authorisation by a judicial authority or by an independent administrative authority (most probably a DPA). As Recital 22 explains:

 

“Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation”.

 

Each State will therefore be able to engage in debate on these issues and decide whether it wishes to enact some of the exceptions provided in Article 5. Some EU Member States might refrain from doing so, in which case they will remain under the ban. Others might decide to enact “rules of national law” authorising some uses by LEAs, in which case they will only be able to use software that has been “certified” as meeting the draft Regulation’s requirements, and they will also have to respect the strict additional conditions set by the future AI Regulation (once, of course, it enters into force).

 

3) The Meaning of “National Law” in the Draft AI Regulation

 

The previous discussion brings us to another important issue that needs to be clarified. During the April 26 discussions, some participants stressed that the use of RBI by LEAs in publicly accessible spaces is already prohibited in principle by existing EU Law, so the “ban” in Article 5(1)(d) does not seem, as such, to constitute a big novelty – although the additional conditions and requirements brought in by the draft Regulation certainly do.

 

Indeed, Articles 8 and 10 of the LED in principle prohibit processing of biometric data by LEAs unless such processing is “strictly necessary” and is “authorised by Union or Member State law”.

 

The draft Regulation clearly explains in Recital 23 that it “is not intended to provide the legal basis for the processing of personal data” under Articles 8 and 10 of the LED. This means that Member States that wish to use the exceptions provided by Article 5(1)(d) of the draft Regulation will not be able to rely on the “authorised by Union law” clause. They will only be able to use such exceptions if they adopt clear and “detailed rules of national law” (Recital 22).

 

This, in turn, raises the question of what the draft Regulation means when it refers to “rules of national law”. Does this necessarily mean legislative measures adopted by national parliaments? Or could it mean simple non-statutory, regulatory measures adopted by competent bodies in EU Member States (for instance, the Prime Minister or the Ministers of the Interior, Home Affairs or Justice)?

 

It is striking that the draft Regulation does not address this issue and does not define what is meant by “rules of national law”. This is clearly an oversight. However, as was stressed during the April 26 workshop, the LED fully applies, and its Recital 33 (drafted in the same way as Recital 41 of the GDPR) gives a clear answer to this question. According to Recital 33:

 

“Where this Directive refers to Member State law, a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned…”

 

Recital 33 of the LED also explains that such a legal basis in relation to a Member State “should be clear and precise and its application foreseeable for those subject to it, as required by the case-law of the Court of Justice and the European Court of Human Rights”, and goes on to set out a series of specific requirements. The draft AI Regulation introduces additional requirements that “national laws” should enact, concerning both the prior authorisation mechanism and the specific limitations (including temporal and geographical ones) that the “reasoned request” by LEAs must take into consideration in order to obtain such authorisation.

 

Conclusion

 

All participants in the April 26 workshop acknowledged that the process of adoption of the draft AI Regulation will be long. As a first step, the European Data Protection Board and the EDPS have been asked to provide a joint opinion on the Commission’s draft proposals within the next eight weeks, a period during which the draft is also open to public consultation. It will be interesting to see, in the next step, to what extent the Council of the EU and the European Parliament (several Committees of which are competing to take the lead on the AI legislative proposal) will be able to find common ground on the issues of RBI and facial recognition. As shown by the initial reactions to the draft AI proposal, these issues will be crucial in the discussions about the draft AI Regulation.

 

The debate has begun and it is essential to have a good understanding of what is proposed by the Commission…

 

The authors would like to thank Stephanie Beltran Gautron and Maeva El Bouchikhi for their help in drafting this paper.

 


 

This paper, “Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation,” was first published at the European Law Blog (ELB) on May 4, 2021. It is reproduced here with the kind permission of the authors, Professor Théodore Christakis and Mathias Becuywe and the ELB Editors.

 

The first part of this article is republished on TAP as “Facial Recognition-Related Provisions of the EU’s Draft AI Regulation, part 1.”

 