Facial Recognition: A Quest for Clarity Behind the "Catch-All" Term

Publication Date: June 09, 2022
There is an important debate going on worldwide about the “red lines” that should be established by regulators in order to prevent endangering people’s freedoms as the result of the use of facial recognition technologies. This debate is crucial. Indeed, beyond the technicalities of the debate, political choices have to be made in order to shape what our society will look like tomorrow: given the power of this technology, how can we reconcile the protection of fundamental rights and freedoms with security, economic considerations and technological competitiveness issues?
- from “Facial Recognition: A Quest for Clarity: Unpicking the ‘Catch-All’ Term.” Theodore Christakis, Professor of International and European Law at Université Grenoble Alpes, is the project leader.
Facial recognition technologies (FRT) can be used in both the private and public spheres to perform multiple functions: authentication, identification, surveillance, emotion recognition, etc. They involve the processing of particularly sensitive biometric data. Unfortunately, people sometimes treat facial recognition as a single block, lumping the different uses of the technology together. Debates over how to regulate the use of FRT often lack clarity, consistent definitions, and a comprehensive understanding of how the distinct functionalities work.
The MAPping the Use of Facial Recognition in Public Spaces in Europe (MAPFRE) project is an independent study with the objective of analyzing the different functionalities and uses of facial recognition, exploring the related legal issues (i.e., data protection, algorithmic bias, use of facial analysis within the context of criminal investigations), and presenting 25 use cases. The authors hope that these reports will help policymakers, the stakeholders developing and using these technologies, and citizens understand the issues so that they can participate in the very important facial recognition debate in Europe.
In “Facial Recognition: A Quest for Clarity: Unpicking the ‘Catch-All’ Term,” the first of six reports from the MAPFRE project, Theodore Christakis (project leader) and his coauthors Karine Bannelier, Claude Castelluccia, and Daniel Le Métayer present the current political landscape, dive into an analysis of the problems of definitions for key facial recognition terms, and explain the project’s main objectives and methodological tools. Below is the Executive Summary from this first report.
Executive Summary from “A Quest for Clarity: Unpicking the ‘Catch-All’ Term.”
The first report from the Mapping the Use of Facial Recognition in Public Spaces in Europe (MAPFRE) project was written by Theodore Christakis (project leader), Karine Bannelier, Claude Castelluccia, and Daniel Le Métayer, with additional contributions from Alexandre Lodie, Stephanie Celis Juarez, Coralie Pison-Hindawi, and Anaïs Trotry.
Regulating the use of facial recognition and face analysis in public spaces is undoubtedly one of the most pressing issues today when it comes to the regulation of artificial intelligence in democratic societies. There is an important debate going on worldwide about the “red lines” that should be established by regulators in order to prevent people’s freedoms being endangered as the result of the use of facial recognition technologies (FRT). In Europe especially, where privacy, data protection and human rights lie at the very heart of the European integration project, this debate is more necessary and pressing than ever. The importance of this issue is reflected in the ongoing legislative work that has followed the European Commission’s introduction, in April 2021, of the draft AI regulation, which includes several important proposals to regulate the use of facial recognition.
Curiously, though, the debate about these fundamental questions is taking place in the absence of a profound assessment of how existing European law is being applied to these issues. Furthermore, the debate on these issues in Europe is also characterised by a high level of imprecision. Journalists, activists and politicians sometimes have a tendency to treat “facial recognition” as a single monolithic bloc, lumping the different functionalities and uses of facial recognition together. In contrast, in an important Opinion published in 2019, the French DPA [Data Protection Authority], CNIL [Commission Nationale de l'Informatique et des Libertés], stressed the importance of clarity and precision in fostering the conditions necessary for an informed and useful debate. “Behind the catch-all term, there are multiple use cases” said the CNIL, adding that “in this context, a use-by-use approach must be applied”.
This is precisely the main objective of the “MAPping the use of Facial Recognition in public spaces in Europe” (MAPFRE) project. Our intention is to offer a detailed independent study that separately presents and analyses the different categories of FRT use in publicly accessible places in the European Union and the UK. To that end, the project will publish a series of reports covering:
- the general context and objectives of the project as well as an analysis of the problem of definitions (Part 1);
- a detailed explanation of the different facial processing functionalities and applications in public spaces in Europe using a classification table, illustrations and charts (Part 2);
- a first-ever detailed report on the use of facial recognition for authorisation purposes in public spaces in Europe (Part 3);
- a report which focuses on the important issue of the use of FRT in criminal investigations (Part 4);
- a deep dive into the equally important issue of large-scale face matching/identification (what the AI draft regulation calls “real-time remote biometric identification”) (Part 5);
- and, finally, a report which discusses the use of “face analysis” in public spaces (which remains marginal in Europe but is likely to develop in the future) and which also presents the general perspectives and recommendations of the MAPFRE project (Part 6).
At the end of this project, we will also present an analysis of “25 selected cases”, illustrating the different categories in our classification table, as well as analysing other cases more briefly.
The current “Report 1” presents the major positions on the debate surrounding the use of FRT as well as the preliminary positions adopted by Members of the European Council and Parliament during the ongoing legislative process concerning the draft AI regulation.
It then dives into the important issue of definitions. Our study shows that the existing definition of “biometric data” in the GDPR [General Data Protection Regulation] and the LED [Law Enforcement Directive] is problematic and confusing. This has compelled some actors to propose amending it in the draft AI Act. However, the consequences of such an amendment could be significant as it is difficult to imagine how we could have a different definition of “biometric data” in the GDPR and the LED to that in the AI Act. Other stakeholders, especially the Rapporteurs of the European Parliament, have instead proposed creating an entirely new category called “biometrics-based data”. While the intentions of the Rapporteurs are understandable, the creation of a new category so similar to the original one might create further confusion in this field.
Following this important discussion, we explain the scope of our study. We cover the use of both “facial recognition” and “face analysis” in public spaces (and we explain the difference between the two terms). Drawing on the draft EU AI Regulation, we also define how we use the term “public spaces”. Finally, with regard to the territorial scope of our study, we explain why we have decided to include cases that originate not only from EU Member States but also from the UK.
Finally, we explain the methodological tools that we have elaborated. The first is a “Classification Table”, which illustrates the uses of facial recognition/analysis in public spaces. This table, to be published in “Part 2” of our MAPFRE series, aims to present, in the most accurate and accessible way, the different facial processing functionalities and applications used in public spaces. The second tool is a detailed analytical framework which asks a number of key questions; its template is presented in an annex to this paper. In summary, it comprises three sets of questions:
- questions on the facts and technical details of the use case;
- questions on human rights and the principles relating to the processing of personal data;
- and questions intended to identify whether the data controller offered any additional guarantees, focusing on issues such as accountability and transparency, whether a Data Protection Impact Assessment (DPIA) was conducted, and whether the effectiveness of the system was evaluated.
We have applied this analytical framework to analyse 25 interesting use cases in detail, covering the various functionalities and applications found in our classification table. Aside from these 25 “selected” case studies, which we will publish at the end of the project, we have also extensively analysed several other important cases of FRT deployment in public spaces in Europe.
We hope that our study will be useful not only to policy-makers, stakeholders, scholars and citizens who may be interested in the issue of facial recognition/analysis, but also to anyone interested in how major human rights and data protection principles, such as the principle of lawfulness, the principles of necessity and proportionality, or other principles relating to the processing of personal data, are interpreted. Indeed, during our research into how facial recognition systems are deployed in Europe, we found a treasure trove of information that includes documents produced by data controllers, legal challenges introduced by civil society, positions of DPAs [Data Protection Authorities], judgments of national courts, articles published by scholars and journalists, and other material. We expect that all of this material will be of great interest not only in terms of the regulation of facial recognition, but also in terms of understanding how the GDPR, the LED and European HR Law apply to a number of important fields.
Read the full report, Part 1 of Mapping the Use of Facial Recognition in Public Spaces in Europe (MAPFRE): “A Quest for Clarity: Unpicking the 'Catch-All' Term”, on the AI-Regulation.com website.

Read More:
Part 2 of the MAPFRE project, “Classification”, provides a path to understanding how the different facial recognition and facial analysis technologies work. This report includes a “Classification Table” which details how the different facial processing functionalities and applications are used in public spaces.
Part 3 of the MAPFRE project, “Facial Recognition for Authorisation Purposes”, is the first-ever detailed analysis of the most widespread use of facial recognition in public (and private) spaces: authorising access to a place or a service.
Parts 4 through 6 will be published in the near future. Look for them on the article page of the AI-Regulation.com website.
This Executive Summary of the first report from the Mapping the Use of Facial Recognition in Public Spaces in Europe project, “A Quest for Clarity: Unpicking the ‘Catch-All’ Term”, was first published on the AI-Regulation.com website on May 16, 2022. It is reproduced here with the kind permission of the project leader, Professor Theodore Christakis.
Disclosure: Microsoft is a corporate sponsor of AI-Regulation.com, the website of the Chair on the Legal and Regulatory Implications of Artificial Intelligence at MIAI Grenoble Alpes, and Microsoft also sponsors the Technology | Academics | Policy (TAP) website. Microsoft respects academic freedom and works to enable dialogue on the most critical tech policy issues being debated.