In an interesting judgment (in Dutch; use Google Translate) delivered by the Hague District Court earlier this week, the Dutch government’s Risk Indication System [“SyRI”] was found to violate the European Convention on Human Rights (read summary). SyRI was an algorithmic system designed to “prevent and combat fraud in the field of social security and income-related schemes, tax and social insurance contributions and labor laws.” It involved collecting data held by various government departments and linking it; the linked data would then be processed against certain “risk indicators”, and according to a “risk model”, in order to produce a “risk report”. The “risk report”, in turn, would flag a person as potentially involved in fraud (in relation to access to welfare benefits, tax payments, etc.), and as a possible subject of further investigation. The data that could be processed included information about work, taxes, property ownership, trade, demographic details, and multiple other categories. In outline, the process ran as follows:

Collection of Data from Different Government Departments → Encryption and Pseudonymising of Data → Processing of Data against Risk Indicators and through the Risk Model (First Phase) → Decryption and De-Pseudonymising of Data that is Flagged as an “Increased Risk” → Further Analysis → Preparation of Risk Report → Potential Further Action on the Basis of the Risk Report.
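For readers who think more easily in code, here is a minimal sketch of what a pipeline of this shape might look like. Everything in it is invented for illustration – the field names, the two indicators, the threshold – since, as discussed below, the actual indicators and risk model were never disclosed.

```python
import hashlib

# Invented stand-ins for the real (undisclosed) risk indicators.
RISK_INDICATORS = {
    "benefit_plus_undeclared_income": lambda r: r["receives_benefit"] and r["undeclared_income"] > 0,
    "address_mismatch": lambda r: r["registered_address"] != r["utility_address"],
}

def pseudonymise(record):
    """First phase: replace the citizen identifier with a pseudonym (here, a hash)."""
    pseudonym = hashlib.sha256(record["citizen_id"].encode()).hexdigest()[:12]
    return {**record, "citizen_id": pseudonym}, record["citizen_id"]

def risk_score(record):
    """Hypothetical 'risk model': count how many indicators fire."""
    return sum(1 for indicator in RISK_INDICATORS.values() if indicator(record))

def run_pipeline(linked_records, threshold=1):
    """Link -> pseudonymise -> score -> de-pseudonymise flagged records -> risk report."""
    report = []
    for record in linked_records:
        pseudonymised, original_id = pseudonymise(record)
        score = risk_score(pseudonymised)
        if score >= threshold:                          # flagged as "increased risk"
            report.append({"citizen_id": original_id,   # identity restored only if flagged
                           "risk_score": score})
    return report

if __name__ == "__main__":
    # Records assumed to be already linked across (fictional) departments.
    records = [
        {"citizen_id": "NL-001", "receives_benefit": True, "undeclared_income": 900,
         "registered_address": "A", "utility_address": "B"},
        {"citizen_id": "NL-002", "receives_benefit": False, "undeclared_income": 0,
         "registered_address": "C", "utility_address": "C"},
    ]
    print(run_pipeline(records))   # only NL-001 appears in the risk report
```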

The enabling legislation that authorised SyRI was challenged by a group of petitioners and privacy organisations, on the now-familiar grounds of privacy and discrimination. The State defended it on the – also familiar – ground that it was of overriding importance to identify fraudulent claimants, so that the welfare system could continue functioning.

The Framework

As we have seen on multiple occasions by now, the outcomes of such cases – involving dense factual disputes – often hinge upon the initial analytical framework set out by the Court. In this case, the Court began by setting out two important principles. First, it noted that the “development of new technologies also means that the right to the protection of personal data is increasingly important.” In an implicit rebuke to the “move fast and break things” school of technological utopianism, the Court emphasised that the greater the scope for technology-enabled intrusion into private life, the greater the importance that must be given to issues of privacy and data protection. Secondly, the Court set out in advance that whatever the stated benefits of SyRI in combating welfare (and other kinds of) fraud, its deployment would have to comply with the data protection standards of transparency, purpose limitation, and data minimisation (see paras 6.31 – 6.33) in a “clear and verifiable way.” This placed the burden firmly upon the State to establish compliance with these fundamental principles, ensuring, in turn – as we shall see – that unresolved factual disputes would mean a verdict in favour of the citizen and against the State, rather than the other way round.

The Analysis 

The Court began by noting that:

…the State has not made public the risk model and the indicators that make up, or may make up, the risk model. In these proceedings, too, [it] did not provide objectively verifiable information to the court in order to enable it to test the views of the State on what SyRI is. The reason given by the State for this is that citizens could adjust their behavior accordingly. This is a conscious choice by the State. (paragraph 6.49)

In this context, the Court noted that the SyRI process involved the use of large data sets (from different sources), their inter-linking, and the potential use of data mining and predictive analysis. The linking of data made it a clear case of profiling (indeed, both parties were in agreement on this). The Court therefore held that while no civil or legal consequence immediately flowed from the preparation of a risk report, it did nonetheless “have a significant effect on the private life of the person to whom the report relates.” Article 8 of the ECHR (the right to privacy) was therefore attracted.

In sum, therefore, SyRI involved profiling individuals on bases that were “secret”, in a manner that impacted their right to privacy, and whose results were not communicated to them. The question then was whether this interference with the right to privacy could be justified as being “necessary in a democratic society.”

During the course of this – essentially – proportionality analysis, the Court accepted the government’s contentions that tackling welfare fraud was both a legitimate State purpose, and a pressing social need. However, it went on to find that SyRI violated all three prongs – transparency, purpose limitation, and data minimisation – of the data protection principles. On the first prong, the Court observed that the “legislation in no way provides information about the factual data that can justify the presence of a particular circumstance, or which objective factual data can justifiably lead to the conclusion that there is an increased risk.” In other words, both the indicators and the risk model – as observed above – were secret. Thus, the Court held:

…it is impossible to check how the simple decision tree that the State is talking about comes about and what steps it consists of. It is thus difficult to see how a person concerned can defend himself against the fact that a risk report has been made with regard to him or her. It is equally difficult to see how a data subject whose data have been processed in SyRI, but which have not led to a risk report, can be aware that his or her data has been processed on the right grounds. (paragraph 6.92)

This, the Court found, was a clear breach of the principle of transparency. The Court did not – as certain other Courts might have done – ask for the indicators and the risk model in a “sealed cover”, so that it could consider their validity for itself. Rather, it straightforwardly held that consequential State decisions involving a violation of the right to privacy could not be made in a non-transparent fashion.
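To make the point concrete: even a “simple decision tree” is impossible to contest if its branches and thresholds are secret. The sketch below is purely hypothetical – nothing about the actual model was disclosed – but it shows the kind of artefact the Court was being asked to take on trust.

```python
def hypothetical_risk_tree(record):
    """A purely invented stand-in for the undisclosed 'simple decision tree'.

    Without seeing the real branches and thresholds, a flagged person cannot
    know which split produced the risk report, let alone challenge it.
    """
    if record["receives_benefit"]:
        if record["undeclared_income"] > 500:          # invented threshold
            return "increased risk"
        if record["registered_address"] != record["utility_address"]:
            return "increased risk"
    return "no risk report"
```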

The Court also went on to note that transparency was crucial because of “the risk that (unintended) discriminatory effects will occur.” In fact, relying upon a report submitted by the UN Special Rapporteur for Extreme Poverty, the Court noted that “areas that are already known as problem neighborhoods are being investigated further. As a result, the chance that irregularities are found is higher than in other neighborhoods, which in turn confirms the image of a problem neighborhood, encourages stereotyping, and reinforces a negative image of the residents who live in the neighborhood, even though no risk report has been made with regard to them” (paragraph 6.92). This, of course, is a standard issue with all such algorithmic procedures: recall that it has repeatedly been found, for example, that the use of DNA databanks in crime-fighting has a discriminatory effect, as the composition of the databank is already likely to overrepresent marginalised populations (the well-known example is that of Black people in the USA) – thus leading to a greater chance of false positives, false negatives, and wrongful convictions of members of those populations.
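The feedback loop that the Court and the Special Rapporteur describe can be made concrete with a toy calculation. The numbers below are entirely invented: two neighbourhoods have exactly the same underlying rate of fraud, but each round a fixed investigation budget is divided in proportion to the findings recorded so far.

```python
def simulate_feedback_loop(rounds=10, true_fraud_rate=0.05, budget=100):
    """Toy illustration with invented numbers: neighbourhoods A and B have the SAME
    underlying fraud rate, but each round the investigation budget is split in
    proportion to findings recorded so far. A's single extra historical 'hit'
    keeps steering more scrutiny - and hence more recorded findings - towards A."""
    findings = {"A": 2.0, "B": 1.0}   # A starts with one extra recorded hit
    for _ in range(rounds):
        total = findings["A"] + findings["B"]
        for hood in ("A", "B"):
            investigations = budget * findings[hood] / total     # scrutiny follows past findings
            findings[hood] += investigations * true_fraud_rate   # expected new hits
    return findings

print(simulate_feedback_loop())
# A ends up with roughly twice the recorded findings of B, even though residents
# of both neighbourhoods are equally (un)likely to commit fraud.
```

The total number of “hits” is fixed by the budget and the (identical) fraud rate; what the loop changes is where they are recorded – which is precisely the stereotyping dynamic the Court was concerned about.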

Crucially, in light of this, the Court found that “it is not possible to assess whether this risk [i.e., of discrimination] has been adequately addressed, in the absence of verifiable insight into the risk indicators and the (operation of) the risk model, including the method of analysis by the Inspectorate SZW” (paragraph 6.94). This is a key point, and it takes us back to the issue of framing. Here – as in other similar cases, such as the litigation around Aadhaar in India and the Huduma Namba in Kenya – the Court had to deal with an information gap on a crucial issue (in this case, the non-disclosure of the indicators and the risk model). Now, there are two ways a Court can respond to this: the first is to say that as these are issues of technological design, they are not fit for judicial review, and that therefore, in the absence of adequate information, they will presumptively be decided in favour of the State, in accordance with the principle of judicial deference. The second way, however – and this is how the Dutch Court responded – is to say that given that crucial rights are involved, and given the well-known potential for exclusion and discrimination that comes with such algorithmic systems, the onus lies upon the State to affirmatively demonstrate that there is no violation of rights. This is the role played by the data protection principles of “transparency and verifiability”, and this is how the Court was able to conclude that:

…[the SyRI legislation], without insight into the risk indicators and the risk model, or at least without further legal safeguards that compensate for this lack of insight, provides insufficient guidance for the conclusion that with the use of SyRI the interference in private life, in the light of the abuse and the fraud that it is intended to combat, is always proportional and therefore necessary. (paragraph 6.95)

The Court then went on to hold that the principles of data minimisation and purpose limitation had also not been complied with. This was because there was no provision for a “pre-test” to determine “whether the interference in private life due to all files linked to that project is necessary, proportionate and subsidiary in view of the specific objective of that project.” (paragraph 6.99)

Conclusion

Questions involving the interface of technology and fundamental rights will – as discussed above – necessarily take Courts into technical domains. In such situations, the easy thing for the State to do is to exploit the information asymmetry: withhold key factual details, and claim that the Court is neither competent nor authorised to get into questions of technological design. Giving in to these arguments by invoking judicial deference is an easy answer for the Court as well.

The easy route, however, is not the correct one. This is because when you are dealing with technology and fundamental rights, issues of design are crucial: the manner in which a technological system will affect fundamental rights depends upon how it is designed (is the data collection centralised? Federated? Can information silos be linked? etc.) Adjudicating upon issues of design is therefore unavoidable: even when the Court refuses to engage with those questions, its very non-engagement is an active decision, one that grants the State a kind of impunity to violate fundamental rights by leaving it unconstrained in the design of the systems that it deploys.

In that context, the judgment of the Hague Court is crucial, because it squarely takes the bull by the horns, and categorically holds that the absence of information is a reason to hold against the State. This, as we have seen, is possible only with a certain kind of framing: a framing that neither takes technological utopianism at face value, nor treats the role of technology in the State’s pursuit of “social goals” such as fraud prevention as an unmitigated good. Rather, it is a framing that takes seriously the potential of technology to violate rights – both privacy and equality – and insists that it is the role of the State to ensure that those concerns are adequately addressed before it proceeds with its project of harnessing technology in the service of its goals. This, I would submit, is far truer to the judicial role of safeguarding fundamental rights.