A Big-Data Analysis System Was Taken to Court and Lost – for Now. The Dutch SyRI Case

By TAP Guest Blogger

Posted on November 18, 2022



This blog post was written for TAP by Dr. Mando Rachovitsa, University of Groningen Faculty of Law.

 

In 2020, the District Court of The Hague rendered its judgment in NCJM et al. and FNV v The State of the Netherlands. The case challenged the Dutch government’s use of System Risk Indication (SyRI) — a big-data analysis system aimed at preventing and combating fraud in the areas of income-dependent schemes, taxes and social security. The Court ruled that neither the legislation governing SyRI nor its use met the requirements laid down in Article 8(2) of the European Convention on Human Rights (ECHR) for an interference with the exercise of the right to private life to be necessary and proportionate. The judgment has been lauded as a ‘landmark ruling’ for addressing the human rights implications of the digital welfare state.

 

Victim Status and Standing

 

The case was brought by a coalition of Dutch civil society organisations and two Dutch citizens challenging the Dutch government’s use of SyRI. In reviewing the admissibility of the claims, the Court held that the two citizens lacked standing because they could not show a ‘sufficiently concrete and personal interest’. The Court did hear the same complaints brought by the civil society interest groups, since a unique provision in the Dutch Civil Code allows for public interest litigation (Article 305a of Book 3, Dutch Civil Code). Had this provision not existed in Dutch legislation (and most jurisdictions have no equivalent), the case would have been inadmissible in its entirety. This brings to the fore the limitations of human rights law in effectively grasping the challenges posed by algorithmic decision-making, especially with regard to proving harm and substantiating the necessary victim status.

 

The Relevance of the Regulatory Frameworks Governing Algorithmic Systems

 

The Court’s analysis in the SyRI case exemplifies the relevance of three legal/regulatory frameworks governing algorithmic systems, namely data protection, human rights law and algorithmic accountability. Notwithstanding the different priorities and vocabularies of these regimes, they can complement one another. Human rights law and data protection principles (transparency, purpose limitation and data minimisation) formed the Court’s standard of assessment. Algorithmic accountability and transparency (or more accurately the absence thereof) came to weigh heavily in finding a violation of the right to privacy.

 

The principle of transparency is an apt example of how the previously mentioned regulatory frameworks intertwine. First, transparency (alongside accessibility and foreseeability) was a significant factor in assessing the intrusiveness of the interference with Article 8 of the ECHR and SyRI’s legality. Second, the Court drew on the principle of transparency under the EU General Data Protection Regulation, together with the detailed rights of data subjects and obligations of data controllers, to assess the necessity and proportionality of the restriction. Finally, the Court emphasised that the absence of algorithmic transparency as to how SyRI worked prevented individuals from claiming and effectively exercising their rights.

 

That said, existing frameworks may fall short of addressing a series of legal issues, including the substantiation of (the risk of) indirect discrimination when predictive analytics come into play; the allocation of the burden of proof when the state refuses to disclose information about a given AI system; and the relevance of human rights law to conceptualising new types of societal harm.

 

The Ramifications of Intentional Opacity: ‘Gaming the System’ Versus the Rule of Law?

 

The (intentional) opacity surrounding the implementation of algorithms in the public sector not only hampers the effective exercise of human rights but also undermines proper judicial oversight. On multiple occasions, the Court expressly held that the absence of transparency and information about how SyRI worked hampered its ability to answer legal questions, including whether SyRI was a self-learning system used for predictive analytics or whether the risk of discrimination was sufficiently neutralised.

 

The Netherlands’ refusal to disclose additional information, on the grounds that citizens would otherwise ‘game the system’, is an argument invoked by many countries. The UN Special Rapporteur on extreme poverty and human rights, in an amicus curiae submitted to the Court, strongly maintained that disclosing information on how AI processes and socio-technical systems function is a matter of public interest serving transparent decision-making.

 

One of the most crucial questions before a court is how to weigh a lack of information when assessing a potential human rights violation. According to the UN Special Rapporteur, such a lack of insight means that the burden of proof falls upon the government to explain convincingly why more openness about such an important system is impossible. The Court seems to have aligned with this position, concluding that the state’s failure to provide a convincing explanation for the lack of transparency, or to offer alternative safeguards to protect data subjects’ rights, was critical to its finding of a violation of the right to privacy.

 

Proving (the Risk of) Discriminatory Effects

 

The importance of transparency in connection with the ability to verify the risk model and risk indicators was all the greater since the use of the risk model entailed the risk of discriminatory effects. The plaintiffs submitted that SyRI was used to investigate neighbourhoods known as ‘problem areas’. This use increased the chances of discovering irregularities in those neighbourhoods compared to others and further contributed to the stereotyping and stigmatising of their residents. The Court found that due to the large amounts of data that qualify for processing and the use of risk profiles ‘there is in fact a risk that SyRI inadvertently creates links based on bias, such as a lower socio-economic status or an immigration background’. For the Court, it sufficed that no evidence was presented that safeguards had been put in place to neutralise the risk of discriminatory effects.
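
To make the Court’s concern concrete, consider a minimal sketch of how a risk model can produce such biased links. SyRI’s actual model, indicators and thresholds were never disclosed, so every feature, weight and threshold below is entirely hypothetical; the sketch only illustrates the mechanism at issue, namely a facially neutral ‘neighbourhood’ indicator operating as a proxy for socio-economic status or immigration background.

```python
# Illustrative sketch only: SyRI's real risk model was never disclosed.
# All features, weights, base rates, and the threshold are hypothetical.
import random

random.seed(42)

def risk_score(person):
    # Hypothetical linear risk model over facially neutral indicators.
    # "lives_in_problem_area" stands in for a residence-based indicator
    # that correlates with socio-economic status.
    return (
        2.0 * person["lives_in_problem_area"]
        + 1.0 * person["benefits_overlap"]
        + 0.5 * person["address_changes"]
    )

def flag_rate(lives_in_problem_area, n=10_000, threshold=3.0):
    flagged = 0
    for _ in range(n):
        person = {
            "lives_in_problem_area": lives_in_problem_area,
            # Identical base rates in both groups, so any gap in the
            # flag rate is produced by the neighbourhood proxy alone.
            "benefits_overlap": random.random() < 0.10,
            "address_changes": random.randint(0, 3),
        }
        if risk_score(person) >= threshold:
            flagged += 1
    return flagged / n

print("flag rate in a 'problem area':", flag_rate(True))   # far higher
print("flag rate elsewhere:          ", flag_rate(False))  # near zero
```

Even with identical behaviour across groups, residents of the flagged neighbourhoods are selected for investigation far more often, which is precisely why the Court treated the inability to verify the risk model and its indicators as so consequential.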

 

Proving indirect discrimination can be a challenging task in any context, and all the more so when algorithmic systems and predictive analytics come into play. The UN Special Rapporteur suggested that the burden lay with the government to provide evidence dispelling the suspicion that SyRI’s singular focus on poor and marginalised groups in Dutch society was discriminatory. While this is a viable path, it is unclear what it entails in terms of evidence and burden of proof in cases where algorithmic systems do not have such a singular focus. In such cases it will be highly improbable for plaintiffs to argue successfully for (a risk of) discrimination without insight into the risk factors used by the algorithm.

 

The Questions Left Unanswered

 

The Court chose not to address a series of questions. First, the Court left ‘undiscussed in its review whether the SyRI legislation is sufficiently accessible and foreseeable and as such affords an adequate legal basis’, as required under Article 8(2) of the ECHR. This choice was justified on the basis ‘that the SyRI legislation in any case contains insufficient safeguards for the conclusion that it is necessary in a democratic society’, and an assessment of the adequacy of the legal basis was thus not made. Given that the Court had reservations about whether the legislation governing SyRI met the accessibility and foreseeability criteria, it is unfortunate that it drew no formal conclusion on this matter, considering the notable lack of prior scrutiny, democratic oversight and public debate when states introduce legislation on implementing AI systems. In the case of SyRI, the government introduced a legal basis for the system’s functioning years after its initial deployment, and it proceeded with its plans despite repeated warnings from the data protection authority and the Advisory Division of the Council of State as to the quality of said law.

 

Second, in view of its finding that the SyRI legislation violated the right to privacy, the Court deemed it unnecessary to assess whether the legislation was also in breach of Article 22 GDPR. The Court’s disinclination to answer the question of whether the submission of the risk report qualified as automated decision-making is a missed opportunity to clarify the scope of Article 22 GDPR, especially since courts rarely have or take the opportunity to interpret and apply this provision.

 

Conclusion

 

The strategic litigation brought by the coalition of Dutch civil society organisations challenging the Dutch government’s use of SyRI was successful on many fronts. The case was won and the Dutch government stopped using SyRI. The case also raised public awareness around the government’s use of algorithmic systems.

 

SyRI, however, is merely the tip of the iceberg as to the human rights violations and abuses ingrained in the Dutch digital welfare state. In the immediate aftermath of the SyRI judgment, the Dutch government moved forward with the Data Processing by Partnerships Act – the so-called ‘Super SyRI’ – which also appears to raise serious problems of compatibility with human rights and data protection. Moreover, the Dutch government recently acknowledged for the first time that institutional racism was a factor in the tax office’s treatment of ethnic minorities. A secret list of 270,000 suspected tax fraudsters was kept on the basis of risk factors that included having a second nationality – an illegal and discriminatory practice. These developments take place against the background of a digital welfare state which sustains its own powerful claim to self-referential authority, despite limited empirical evidence that its use in government achieves the intended results. In fact, SyRI’s ability to achieve the purported objective of reducing benefit fraud has been seriously disputed.

 

Read the commentary of the judgment in full: Rachovitsa & Johann, ‘The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case’ (2022) 22 Human Rights Law Review.

 

Dr. Mando Rachovitsa is an Assistant Professor of International Law at the University of Groningen. Her research is in the areas of international law, human rights law, and international technology law. Dr. Rachovitsa was named Lecturer of the Year (Faculty of Law, University of Groningen, 2017), making her the first woman, international member of staff, and non-Dutch speaker to receive this award in the Faculty of Law.

