Earlier this week, the High Court of Kenya delivered a landmark judgment on the constitutional validity of Kenya’s biometric identification system (the National Integrated Identity Management System (NIIMS)/Huduma Namba). In short, the High Court held that (a) the consensual collection of Kenyans’ biometric details for the purpose of identification and verification was constitutionally valid; (b) however, the collection of DNA and GPS details was unconstitutional; and (c) NIIMS itself would have to be halted until the Kenyan government implemented data protection legislation, as well as framed regulations in order to safeguard the data collected.
With this judgment, the Kenyan High Court becomes the third constitutional court in the world (after India and Jamaica) to rule on the constitutionality of centralised biometric ID systems. Before we analyse the judgment, two things must be noted. First, this judgment was delivered by a first-instance Court, following a formal trial and the taking of evidence. There are two further levels of appeal within the Kenyan judicial system, and therefore, this is unlikely to be the last word on the subject. And secondly, as indicated above and as will be seen below, the High Court’s decision – at least in part – is a conditional one, where the (legal) future of the NIIMS is expressly made dependent on what action the government will take. Thus, a significant number of issues remain open for (inevitable) litigation, even after the High Court’s judgment.
National biometric identification systems – and constitutional challenges to them – are, by now, familiar. Indian readers will immediately recall Aadhaar (although, funnily – as the judgment records – Kenyan government lawyers went to some lengths to distinguish NIIMS from Aadhaar). Kenya’s NIIMS bears some similarities with Aadhaar, in that it too is a centralised, biometric identification system, whose purpose is to grant a “unique identification number” to registered persons, and then to use this number for the purposes of future verification of identity (see paragraph 3 of the judgment). There are also some differences: NIIMS does not, at this point, appear to have a procedure for authentication of transactions (the heart of Aadhaar); unlike Aadhaar, its use is (so far) open-ended, in that it is not specified that it will be required for a set of purposes, such as subsidies, tax-paying, and so on; the legal framework for NIIMS explicitly envisages “harmonising” of information in different existing databases; and – until the Court struck it down – NIIMS aimed to collect DNA and GPS details.
These differences notwithstanding, as in the case of India as well as Jamaica, the constitutional challenge took a similar form. Apart from a number of procedural issues that we shall not discuss in this post, there were two core issues: privacy/surveillance/data protection on the one hand, and exclusion/equality/discrimination, on the other.
Privacy, Surveillance, and Data Protection: The Analysis
The High Court’s analysis of the privacy issues began at paragraph 705, where it framed the core issues for consideration. As we have discussed previously on this blog, for clarity of understanding, it is helpful to divide the issues into three distinct phases (although there is, of course, overlap between them): data collection (Phase I), data storage (Phase II), and data use (Phase III). It can then be asked: is there a violation of rights at each stage, and if so, is it unconstitutional?
In summary – and apart from DNA and GPS collection, which the Court found disproportionately intrusive per se, and struck down – it was held that (a) collection of biometric data for the purposes of identification was valid, but that (b) its storage or use without an implemented data protection legislation was unconstitutional. The government, thus, was found in breach of its constitutional obligations with respect to Phases II and III, and the project was ordered to be halted until – and unless – the government complied.
It is important to note, however, that the validity of data collection was upheld on the premise that it had been done consensually (paragraph 765). This was the government’s case, and the Court held that the petitioners had not sufficiently established that the data had been taken under compulsion. Interestingly, the Court had another opportunity to rule on whether making enrolment into NIIMS mandatory in order to access entitlements or services would breach the Constitution later in the judgment, while assessing the equality challenge. There, again, it did not issue a finding on the subject. Consequently, while the Court found (a) that there was a strong privacy interest that individuals have in their biometric information (paragraph 760), and (b) that collection of biometric data for the purposes of identification was valid and proportionate, the question of whether compelled collection of biometric details for the same purpose violated the Constitution was left open. This, of course, raises important issues in its own right, such as the principle of technological self-determination, which grants to individuals the choice of whether and to what extent they will engage with pervasive technological systems, and more specifically, provides them with a choice in how they will identify themselves to the government.
Data Storage and Use
This brings us to the second and third questions: that of data storage and use, or, in simple terms, the actual working of the NIIMS (paragraphs 772 & 773). Once again, for the sake of conceptual clarity, we can divide the challenges into three broad heads. First, there was a foundational challenge to the design of the system itself; as Petitioners’ witness, Anand Venkatanaraynan, pointed out during his evidence, “the law cannot fix what technology has broken.” It was argued, therefore, that the technical architecture of NIIMS – in particular, the decision to have a centralised system – violated constitutional rights. Secondly, there was a more concrete challenge to the legal design: it was argued that NIIMS’ legal framework was open-ended and did not specify the uses that it would be put to. This, therefore, violated the principle of purpose limitation. And thirdly, of course, there was the direct and specific challenge to the functioning of NIIMS in the absence of any data protection framework.
How did the Court answer these three questions? On the first, it held that the design of the system was not subject to judicial review, and therefore, ventured no opinion on it. On the second, it held that purpose limitation was indeed built into NIIMS’ legal framework: the purpose of data collection was identification and verification of individuals, and that was why the biometric data had been picked. And on the third, the Court did indeed hold that the absence of a data protection framework made the project unconstitutional (indeed, the Court rapped the government for going forward with the project “in a rush”).
In this context, after the initial hearings had been concluded, the Kenyan Parliament had indeed passed a Data Protection Act. The Court took judicial notice of the Act, and observed that its provisions were “broadly” in line with data protection best practices (the Court sourced these from the OECD) (paragraph 852). Notably, however, that wasn’t enough for the Court: it insisted that until the DPA 2019 was actually implemented on the ground – that is, the Data Protection Authority was established, started functioning, and so on – the project couldn’t go ahead. It also held that until specific statutory regulations were enacted dealing with storage and sharing of data (it cited the Aadhaar Regulations as an example of how this could be done), the project would be halted.
I shall come back to the first two points later, as I feel that – with respect – the Court’s analysis on both counts was flawed. On the third point, however, two things must be noted. The first is the stark difference between the Kenyan High Court’s judgment and the Indian Supreme Court’s Aadhaar Judgment. Recall that a “Data Protection Law” was promised by the government as far back as May 2017, even before Puttaswamy-I (privacy) was decided. In both Puttaswamy I (privacy) and II (Aadhaar), the Supreme Court took note of the government’s promises – but to this day, we do not have a Data Protection Act in India (despite Aadhaar now entering its tenth year). By expressly halting NIIMS until the Data Protection Act was implemented (note: not just “enacted”), the Kenyan High Court ensured that there would be no repeat of such bait-and-switch tactics. That said, however, there is a second point: while the Court did observe that the DPA broadly conformed to constitutional standards, a quick look at its provisions suggests that there are some concerning aspects to it. For example, the Kenyan DPA does not require the proportionality test to be satisfied in cases of non-consensual data processing, as long as “public interest” can be shown. Of course, the constitutional validity of the DPA was not itself before the High Court, and therefore, it did not return any detailed findings on the issue. Presumably now, however, if the Kenyan government implements the DPA and then goes ahead with NIIMS, the DPA itself will become the subject of constitutional litigation sooner rather than later.
Equality and Non-Discrimination: The Analysis
In a somewhat disappointing outcome, the High Court held that the challenges on grounds of equality and non-discrimination did not succeed. These challenges had been brought by groups representing Kenya’s Nubian population, which had been historically subjected to exclusion and discrimination – including discrimination in access to IDs. The High Court found that the NIIMS framework was neutrally worded, and did not impose any additional onerous requirements on Nubians as far as access to documentation was concerned. And on the issue of exclusion in case NIIMS enrolment was made mandatory for access to government services, the Court noted – in somewhat anodyne terms – that while exclusion was a matter of concern, there was no going back to the paper age; consequently, issues of exclusion would have to be tackled through “appropriate regulatory mechanisms”, but that was not adequate ground for holding NIIMS itself unconstitutional.
Perhaps the Court here was hampered by the lack of direct evidence of exclusion, as – unlike Section 7 of the Aadhaar Act – NIIMS is not at present mandatory for accessing entitlements or government subsidies. That said, with respect, the issues of equality and non-discrimination are more nuanced and layered than the Court gave credit for, and in due course, this issue will – hopefully – be revisited.
Design and Purpose Limitation: Two Flaws
While many parts of the High Court’s judgment are persuasive and strongly reasoned (as indicated above), there are two areas where, with respect, the Court got it wrong, in my view. I discuss these below.
The first is the Court’s refusal to go into the question of the design of NIIMS (paragraphs 875, 876, and 882). The Court’s hesitation is entirely understandable: this is a technical issue, and the judiciary does not necessarily have the expertise to rule on technology. That, however, is neither here nor there: expert evidence was led on both sides, and the Court records the evidence of the witnesses with great facility.
More importantly, however, the Court cannot evade addressing questions of design, because when you have a technological system like India’s Aadhaar or Kenya’s NIIMS, design and fundamental rights are inextricably bound up with each other (a point made by Chandrachud J. in his dissenting judgment in Aadhaar). This was also a point I highlighted a little earlier, while examining the Hague District Court’s judgment on SyRI: the choices that system designers make at the time of design have a direct impact upon how and to what extent the system, in its final form, will impact civil rights. For example, a centralised biometric identification system allows for seeding and profiling in a way that a decentralised system (Estonia’s example was specifically taken) does not. This was, of course, argued by petitioners in the Aadhaar case as well, where smart cards were put forward as a less intrusive alternative to the centralised database (as we know, the Supreme Court dodged the issue there as well, by pretending that it was never argued).
Why is this important? This is important because under the proportionality standard (applicable in both India and Kenya), the State is required to select – out of a range of choices open to it – the method that will infringe rights to the least degree, in pursuit of its goals (the “necessity” step). Thus, if I – as the system designer – have before me two design choices (say, centralised and decentralised), and I choose the one that enables or facilitates a greater degree of rights violations, then at the moment at which the State approves that design choice, it violates the proportionality standard.
Now of course, a Court may find that the choice of a centralised system does not violate proportionality. The point, however, is that a Court cannot avoid engaging with – and answering – that question. To do so is to implicitly endorse the State’s choice of design, and, by implication, take design questions out of the purview of constitutional reasoning altogether. Therefore, when the High Court noted – just after declaring that it would not be looking at design – that it would be “confining” itself to issues of “privacy and data protection” (paragraph 876), it necessarily followed that it would have to deal with issues of design as well: because it could not deal with privacy and data protection without factoring in how the choice of design impacted both issues. In such a situation, to abstain would amount to an abdication of the judicial role.
Secondly, it is respectfully submitted that the Court misapplied the requirement of purpose limitation. To put it very simply, purpose limitation – in the context of data protection – requires that information collected be used only for the purpose for which it is specified, and nothing beyond. Petitioners had argued that as NIIMS was entirely open-ended, and did not specify what the information was going to be used for, purpose limitation had been violated. To this the Court responded that the purpose was “verification”, and therefore, there was no violation (paragraph 787).
This, however, is flawed. Let me explain with the help of a hypothetical. Suppose I am a police officer, and I go to the Magistrate for a warrant to search a specific house. The Magistrate asks me, ‘what is your purpose in searching this house?’ I answer: ‘to find out what is inside.’ If the Magistrate has any sense, he will refuse the warrant. The point is simple: if “purpose” is defined in the very terms of the activity itself, then all you get is a tautology. ‘Why have you jailed this person?’ ‘To deprive them of liberty.’ ‘Why are you collecting identifiable biometric data?’ ‘To identify people.’ etc.
Purpose limitation, therefore, is not satisfied by holding that identifying data is being collected with the purpose of identifying people: the correct question is what are people being identified for. In the Indian context, for instance, there were a set of defined purposes for which Aadhaar was supposed to be used as an identifier, that were set out in legislation (the efficacy of that is something I will not get into here): accessing government subsidies, banking, buying a SIM card, and paying taxes. When we look at it this way, we also see another reason why purpose limitation is important: there needs to be an opportunity to challenge the collection and use of biometric data with respect to the specific purpose that it is being put to. In the Aadhaar case, for example, the Supreme Court found that it was proportionate for accessing subsidies and paying tax, but disproportionate for buying SIM Cards and opening bank accounts. A general, open-ended “purpose” of identification (as is set out in the NIIMS statutory framework) would have made these specific challenges impossible.
The “purpose”, therefore, has to be set out in concrete terms: why, specifically, is this data being collected, and what specific use is it going to be put to? With respect, the High Court’s framing of the issue built in the very assumptions that would lead it to the wrong answers.
The judgment of the High Court of Kenya provides us with a strong and well-reasoned analysis of the NIIMS framework, and has some important findings: in particular, on the strong privacy interests in biometric data, as well as the necessity to implement data protection laws before taking on a nation-wide data collection exercise. That said, on issues of design and purpose limitation, the High Court’s conclusions may need reconsideration. And on a third set of issues (the data protection framework itself), the field remains open. What is certain is that this is only the first salvo in a long struggle, and the progress of this case through the Kenyan courts will be fascinating to watch.
(Disclaimer: The author provided research assistance to some of the petitioners in this case).