Coronavirus and the Constitution – XXI: The Mandatory Imposition of the Aarogya Setu App

The extension of the “nationwide lockdown” by another two weeks has brought with it a slew of further directions under the National Disaster Management Act. Many of these directions exacerbate the problems pointed out in previous posts. For example, unlike previous directions, this one actually does impose a physical curfew (between 7PM and 7AM), and directs local authorities to pass necessary orders implementing it. This particular direction lies at the intersection of rule by executive decree and the undermining of federalism, as discussed previously. In this post, however, I want to briefly consider Guideline 15 of Annexure 1, which mandates the use of the government’s contact tracing app – Aarogya Setu – for all private and public employees, and obligates employers to ensure 100% coverage.

To those who have followed the many twists and turns of the Aadhaar story, this metamorphosis from “voluntary” to “voluntary-mandatory” to “effectively mandatory” will have a familiar ring – the pandemic probably just accelerated the pace of transformation from a few years to a few weeks. The mandatory imposition of Aarogya Setu through executive decree, however, suffers from serious legal problems, discussed below.

The Absence of Anchoring Legislation

As pointed out repeatedly on this blog, the legal framework for the government’s pandemic management strategy has been the National Disaster Management Act, which contains an umbrella clause permitting the issuance of guidelines and directions aimed at addressing disasters. We have previously discussed the separation of powers and other democratic problems that come with using vague enabling legislation to anchor a wide-reaching executive response. When it comes to the infringement of rights, however, the problem is even more acute: Part III of the Constitution requires that, even before we get to the question of whether a rights violation is justified, there must exist a law that authorises it. Any such law has to be specific and explicit with respect to the rights that it seeks to infringe, the bases of infringement, the procedural safeguards that it establishes, and so on.

The NDMA cannot be such a law, because it says absolutely nothing about the circumstances, manner, and limitations under which the government is authorised to limit or infringe civil rights (in this case, the right to privacy). The enabling clauses do not help, because – as pointed out above – they are generic enough to permit just about any executive decree that (the executive believes) is required to tackle the disaster. If the NDMA were accepted as a sufficient basis, the legality requirement would effectively be subverted entirely and across the board: there could, hypothetically, be one single umbrella legislation stipulating that “the government may do anything that it believes is reasonable to achieve the public interest”, doing away with any further need for lawmaking in toto. This, however, is the very definition of rule by executive, instead of rule by and of law.

It should be noted that the proposition I am advancing here is a very basic one. Last week, for example, the High Court of Kerala refused to allow the government to cut salaries without specific legislation authorising it (the Court correctly observed that the existing provisions of the Epidemics Act and the Kerala Covid-19 Ordinance were far too generic to authorise such a step). We shall discuss the judgment of the Kerala High Court in a subsequent post, but for now, suffice it to say that this is not just a basic proposition under Indian law, but a basic proposition everywhere. The Israeli High Court – not exactly known for being a hotbed of bleeding-heart liberal jurisprudence – held a few days ago that the Shin Bet could not engage in surveillance without authorising legislation. A few months ago, the High Court of Kenya held that GPS coordinates and DNA samples could not be collected under cover of a general law, but would – at the very least – require “anchoring legislation”.

The requirement of specific legislation is not a mere procedural quibble, but reflects crucial constitutional concerns. The first, of course, is the separation of powers issue, which we have discussed before: if the State is going to mandate an intrusive, data-collecting app upon its citizens, then the least that ought to be done is that it be authorised by the citizens’ elected representatives, in Parliament. Equally importantly, however, a hypothetical “Aarogya Setu law” would necessarily have to demonstrate constitutional compliance with data protection principles. A good example of this – again – is the history of Aadhaar: once it became clear to the government that it actually had to pass an Aadhaar Act, the accompanying infrastructure – including limitations upon the use of Aadhaar – also had to be considered. Writing out these provisions in law also enabled an informed challenge in Court, where at least a part of the Act was struck down as unconstitutional (I need not go over that again here). Blithely mandating Aarogya Setu in one sentence of an executive decree tears this constitutional architecture to shreds.

The Proportionality Test(s)

Given the government’s penchant for Ordinances (the Kerala government has, for example, issued an ordinance to get around the High Court’s salaries judgment), the requirement of legislation is unlikely to present an effective check upon executive abuse. That, however, makes it important to highlight that there exist serious substantive constitutional concerns with the mandatory use of the Aarogya Setu app.

As is well known, the proportionality standard for adjudicating whether a violation of the right to privacy is justified or not has four prongs: legality (requirement of a law, with a legitimate purpose), suitability (the government’s action must be suitable for addressing the problem, i.e., there must be a rational relationship between means and ends), necessity (i.e., it must be the least restrictive alternative), and proportionality stricto sensu (there must be a balance between the extent to which rights are infringed and the State’s legitimate purpose).

There is, by now, extensive literature on the effectiveness of contact-tracing apps in fighting a pandemic such as Covid-19. As this Brookings paper shows, (a) contact tracing is effective where there exists large-scale testing capacity and relatively limited spread (the first condition certainly does not exist in India today); (b) there is a high risk of false positives and false negatives, a problem that gets worse as the population size increases (recent examples in India bear testimony to this); and (c) the absence of complete smartphone penetration can defeat the purpose (particularly true for India) (the authors also point out other risks, such as social stigmatisation). It is, therefore, an open question whether the second limb of the proportionality test – suitability/rationality – is satisfied.

The problem grows more severe when we come to the necessity prong (discussed previously on this blog as well). The data collection practices of the Aarogya Setu app – and how they fall short of constitutional standards – have already been discussed extensively (see here, here, here, and here). It is not the purpose of this post to engage in a detailed technical discussion about whether the Aarogya Setu app complies with the third limb of the proportionality standard (much of that work may be accessed in the links above). However, there is a broader legal point that needs to be noted: the issue of burden. It is well established under Indian constitutional jurisprudence – most recently in the Aadhaar judgment – that once a prima facie violation of privacy has been demonstrated, the burden of justification (under the proportionality standard) shifts to the State. In other words, it is for the State to show that the suitability and necessity prongs of the proportionality standard are satisfied. A necessary corollary is that, as far as the suitability prong goes, the State cannot mandate the use of a privacy-infringing app before it is first demonstrably established that a means-ends relationship actually exists. Thus, if – as the Brookings analysis shows – there is a non-trivial likelihood that the app in question cannot achieve the very (legitimate) purpose that it is designed for, it cannot be made mandatory.

Secondly, the necessity prong creates a constitutional obligation upon the State to be transparent about the basis for choosing this app, designed in this way. Were less intrusive alternatives considered (see the IFF working paper linked above)? If so, were they found unsuitable for the goal? If not, why were they rejected? And either way, why is there no mandatory sunset clause? Once again, this is not a radical legal proposition: in the Aadhaar judgment, the mandatory linking of bank accounts with Aadhaar was struck down precisely on the basis that there existed less restrictive alternatives, and that the government had comprehensively failed to provide any reasons why they had not been considered. It is fair to say that if the government cannot even show why it has chosen a more intrusive data-collecting app over a less intrusive alternative that exists, then its decision is in no sense constitutionally justified.

Conclusion

The government directive mandating Aarogya Setu for all public and private employees suffers from serious legal flaws. In the absence of specific anchoring legislation, it fails the first limb of the proportionality test. And on more substantive grounds, the government bears the burden of showing that the design of the app satisfies both the suitability and the necessity prongs of the test – a burden that, thus far, remains undischarged (indeed, going by blithe ministerial statements about how the app might continue to be in use for two years, there seems to be very little appetite in the government to even attempt to discharge it). There would, therefore, appear to be excellent legal grounds for a challenge to the NDMA Direction; whether any such challenge can succeed at a time when the Court appears to have withdrawn from its task of rights adjudication is, of course, another matter.

Notes From a Foreign Field: The Kenyan High Court’s Judgment on the National Biometric ID System

Earlier this week, the High Court of Kenya delivered a landmark judgment on the constitutional validity of Kenya’s biometric identification system (the National Integrated Identity Management System (NIIMS)/Huduma Namba). In short, the High Court held that (a) the consensual collection of Kenyans’ biometric details for the purpose of identification and verification was constitutionally valid; (b) however, the collection of DNA and GPS details was unconstitutional; and (c) NIIMS itself would have to be halted until the Kenyan government implemented data protection legislation, as well as framed regulations in order to safeguard the data collected.

With this judgment, the Kenyan High Court becomes the third constitutional court in the world (after those of India and Jamaica) to rule on the constitutionality of centralised biometric ID systems. Before we analyse the judgment, two things must be noted. First, this judgment was delivered by a first-instance court, following a formal trial and the taking of evidence. There are two further levels of appeal within the Kenyan judicial system, and this is therefore unlikely to be the last word on the subject. Secondly, as indicated above and as will be seen below, the High Court’s decision – at least in part – is a conditional one, where the (legal) future of NIIMS is expressly made dependent on what action the government takes. Thus, a significant number of issues remain open for (inevitable) litigation, even after the High Court’s judgment.

The Issues

National biometric identification systems – and constitutional challenges to them – are, by now, familiar. Indian readers will immediately recall Aadhaar (although, funnily enough – as the judgment records – Kenyan government lawyers went to some lengths to distinguish NIIMS from Aadhaar). Kenya’s NIIMS bears some similarities to Aadhaar: it too is a centralised, biometric identification system, whose purpose is to assign a “unique identification number” to registered persons, and then to use that number for future verification of identity (see paragraph 3 of the judgment). There are also some differences: NIIMS does not, at this point, appear to have a procedure for authentication of transactions (the heart of Aadhaar); unlike Aadhaar, its use is (so far) open-ended, in that it is not specified that it will be required for a set of purposes, such as subsidies, tax-paying, and so on; the legal framework for NIIMS explicitly envisages “harmonising” information held in different existing databases; and – until the Court struck it down – NIIMS aimed to collect DNA and GPS details.

These differences notwithstanding, as in the case of India as well as Jamaica, the constitutional challenge took a similar form. Apart from a number of procedural issues that we shall not discuss in this post, there were two core issues: privacy/surveillance/data protection on the one hand, and exclusion/equality/discrimination, on the other.

Privacy, Surveillance, and Data Protection: The Analysis

The High Court’s analysis of the privacy issues began at paragraph 705, where it framed the core issues for consideration. As we have discussed previously on this blog, for clarity of understanding it is helpful to divide the issues into three distinct phases (although there is, of course, overlap between them): data collection (Phase I), data storage (Phase II), and data use (Phase III). It can then be asked whether there is a violation of rights at each stage, and if so, whether that violation is unconstitutional.

Data Collection

In summary – and apart from DNA and GPS collection, which the Court found disproportionately intrusive per se, and struck down – it was held that (a) the collection of biometric data for the purposes of identification was valid, but (b) its storage or use without an implemented data protection law was unconstitutional. The government, thus, was found in breach of its constitutional obligations with respect to Phases II and III, and the project was ordered to be halted until – and unless – the government complied.

It is important to note, however, that the validity of data collection was upheld on the premise that it had been carried out consensually (paragraph 765). This was the government’s case, and the Court held that the petitioners had not sufficiently established that the data had been taken under compulsion. Interestingly, the Court had another opportunity – later in the judgment, while assessing the equality challenge – to rule on whether making enrolment into NIIMS mandatory in order to access entitlements or services would breach the Constitution. There, again, it did not issue a finding on the subject. Consequently, while the Court found (a) that individuals have a strong privacy interest in their biometric information (paragraph 760), and (b) that the collection of biometric data for the purposes of identification was valid and proportionate, the question of whether compelled collection of biometric details for the same purpose violates the Constitution was left open. This, of course, raises important issues in its own right, such as the principle of technological self-determination, which grants to individuals the choice of whether and to what extent they will engage with pervasive technological systems, and more specifically, provides them with a choice in how they will identify themselves to the government.

Data Storage and Use

This brings us to the second and third questions: those of data storage and use, or, in simple terms, the actual working of NIIMS (paragraphs 772 & 773). Once again, for the sake of conceptual clarity, we can divide the challenges into three broad heads. First, there was a foundational challenge to the design of the system itself; as the Petitioners’ witness, Anand Venkatanaraynan, pointed out during his evidence, “the law cannot fix what technology has broken.” It was argued, therefore, that the technical architecture of NIIMS – in particular, the decision to have a centralised system – violated constitutional rights. Secondly, there was a more concrete challenge to the legal design: it was argued that NIIMS’ legal framework was open-ended and did not specify the uses to which it would be put. This, therefore, violated the principle of purpose limitation. And thirdly, of course, there was the direct and specific challenge to the functioning of NIIMS in the absence of any data protection framework.

How did the Court answer these three questions? On the first, it held that the design of the system was not subject to judicial review, and therefore ventured no opinion on it. On the second, it held that purpose limitation was indeed built into NIIMS’ legal framework: the purpose of data collection was the identification and verification of individuals, and that was why biometric data had been chosen. And on the third, the Court did indeed hold that the absence of a data protection framework made the project unconstitutional (indeed, the Court rapped the government for going forward with the project “in a rush”).

In this context, after the initial hearings had concluded, the Kenyan Parliament had indeed passed a Data Protection Act. The Court took judicial notice of the Act, and observed that its provisions were “broadly” in line with data protection best practices (which the Court sourced from the OECD) (paragraph 852). Notably, however, that was not enough for the Court: it insisted that until the DPA 2019 was actually implemented on the ground – that is, until the Data Protection Authority was established, started functioning, and so on – the project could not go ahead. It also held that until specific statutory regulations were enacted dealing with the storage and sharing of data (it cited the Aadhaar Regulations as an example of how this could be done), the project would be halted.

I shall come back to the first two points (design and purpose limitation) later, as I feel that – with respect – the Court’s analysis on both counts was flawed. On the third point, however, two things must be noted. The first is the stark difference between the Kenyan High Court’s judgment and the Indian Supreme Court’s Aadhaar judgment. Recall that a “Data Protection Law” was promised by the Indian government as far back as May 2017, even before Puttaswamy I (privacy) was decided. In both Puttaswamy I (privacy) and II (Aadhaar), the Supreme Court took note of the government’s promises – but to this day, we do not have a Data Protection Act in India (despite Aadhaar now entering its tenth year). By expressly halting NIIMS until the Data Protection Act was implemented (note: not just “enacted”), the Kenyan High Court ensured that there would be no repeat of such bait-and-switch tactics. That said, there is a second point: while the Court did observe that the DPA broadly conformed to constitutional standards, a quick look at its provisions suggests that there are some concerning aspects to it. For example, the Kenyan DPA does not require the proportionality test to be satisfied in cases of non-consensual data processing, as long as “public interest” can be shown. Of course, the constitutional validity of the DPA was not itself before the High Court, and it therefore did not return any detailed findings on the issue. Presumably, however, if the Kenyan government implements the DPA and then goes ahead with NIIMS, the DPA itself will become the subject of constitutional litigation sooner rather than later.

Equality and Non-Discrimination: The Analysis

In a somewhat disappointing outcome, the High Court held that the challenges on grounds of equality and non-discrimination did not succeed. These challenges had been brought by groups representing Kenya’s Nubian population, which had been historically subjected to exclusion and discrimination – including discrimination in access to IDs. The High Court found that the NIIMS framework was neutrally worded, and did not impose any additional onerous requirements on Nubians as far as access to documentation was concerned. And on the issue of exclusion in case NIIMS enrolment was made mandatory for access to government services, the Court noted – in somewhat anodyne terms – that while exclusion was a matter of concern, there was no going back to the paper age; consequently, issues of exclusion would have to be tackled through “appropriate regulatory mechanisms”, but that was not adequate ground for holding NIIMS itself unconstitutional.

Perhaps the Court here was hampered by the lack of direct evidence of exclusion, as – unlike Section 7 of the Aadhaar Act – NIIMS is not at present mandatory for accessing entitlements or government subsidies. That said, with respect, the issues of equality and non-discrimination are more nuanced and layered than the Court acknowledged, and in due course, these issues will – hopefully – be revisited.

Design and Purpose Limitation: Two Flaws

While many parts of the High Court’s judgment are persuasive and strongly reasoned (as indicated above), there are two areas where, with respect, the Court got it wrong, in my view. I discuss these below.

Design

The first is the Court’s refusal to go into the question of the design of NIIMS (paragraphs 875, 876, and 882). The Court’s hesitation is entirely understandable: this is a technical issue, and the judiciary does not necessarily have the expertise to rule on technology. That, however, is neither here nor there: expert evidence was led on both sides, and the Court records the evidence of the witnesses with great facility.

More importantly, however, the Court cannot evade questions of design, because when you have a technological system like India’s Aadhaar or Kenya’s NIIMS, design and fundamental rights are inextricably bound up with each other (a point made by Chandrachud J. in his dissenting judgment in Aadhaar). This was also a point I highlighted a little earlier, while examining the Hague District Court’s judgment on SyRI: the choices that system designers make at the design stage have a direct impact upon how, and to what extent, the system in its final form will affect civil rights. For example, a centralised biometric identification system allows for seeding and profiling in a way that a decentralised system (Estonia’s example was specifically taken) does not. This was, of course, argued by the petitioners in the Aadhaar case as well, where smart cards were put forward as a less intrusive alternative to the centralised database (as we know, the Supreme Court dodged the issue there as well, by pretending that it was never argued).

Why is this important? This is important because under the proportionality standard (applicable in both India and Kenya), the State is required to select – out of a range of choices open to it – the method that will infringe rights to the least degree, in pursuit of its goals (the “necessity” step). Thus, if I – as the system designer – have before me two design choices (say, centralised and decentralised), and I choose the one that enables or facilitates a greater degree of rights violations, then at the moment at which the State approves that design choice, it violates the proportionality standard.

Now, of course, a Court may find that the choice of a centralised system does not violate proportionality. The point, however, is that a Court cannot avoid engaging with – and answering – that question. To do so is to implicitly endorse the State’s choice of design, and, by implication, to take design questions out of the purview of constitutional reasoning altogether. Therefore, when the High Court noted – just after declaring that it would not be looking at design – that it would be “confining” itself to issues of “privacy and data protection” (paragraph 876), it necessarily followed that it would have to deal with issues of design as well: it could not deal with privacy and data protection without factoring in how the choice of design impacted both. In such a situation, to abstain amounts to an abdication of the judicial role.

Purpose Limitation

Secondly, it is respectfully submitted that the Court misapplied the requirement of purpose limitation. Put very simply, purpose limitation – in the context of data protection – requires that information be used only for the specified purpose for which it was collected, and nothing beyond. The petitioners had argued that because NIIMS was entirely open-ended, and did not specify what the information was going to be used for, purpose limitation had been violated. To this the Court responded that the purpose was “verification”, and that there was therefore no violation (paragraph 787).

This, however, is flawed. Let me explain with the help of a hypothetical. Suppose I am a police officer, and I go to the Magistrate for a warrant to search a specific house. The Magistrate asks me, ‘what is your purpose in searching this house?’ I answer: ‘to find out what is inside.’ If the Magistrate has any sense, he will refuse the warrant. The point is simple: if “purpose” is defined in the very terms of the activity itself, then all you get is a tautology. ‘Why have you jailed this person?’ ‘To deprive them of liberty.’ ‘Why are you collecting identifiable biometric data?’ ‘To identify people.’ etc.

Purpose limitation, therefore, is not satisfied by holding that identifying data is being collected with the purpose of identifying people: the correct question is what people are being identified for. In the Indian context, for instance, there was a set of defined purposes for which Aadhaar was supposed to be used as an identifier, set out in legislation (the efficacy of that is something I will not get into here): accessing government subsidies, banking, buying a SIM card, and paying taxes. When we look at it this way, we also see another reason why purpose limitation is important: there needs to be an opportunity to challenge the collection and use of biometric data with respect to the specific purpose that it is being put to. In the Aadhaar case, for example, the Supreme Court found that Aadhaar was proportionate for accessing subsidies and paying tax, but disproportionate for buying SIM cards and opening bank accounts. A general, open-ended “purpose” of identification (as is set out in the NIIMS statutory framework) would have made these specific challenges impossible.

The “purpose”, therefore, has to be set out in concrete terms: why, specifically, is this data being collected, and what specific use is it going to be put to? With respect, the High Court’s framing of the issue betrayed the very assumptions that would lead it to the wrong answers.

Conclusion

The judgment of the High Court of Kenya provides us with a strong and well-reasoned analysis of the NIIMS framework, and has some important findings: in particular, on the strong privacy interests in biometric data, as well as the necessity of implementing data protection laws before embarking on a nationwide data collection exercise. That said, on issues of design and purpose limitation, the High Court’s conclusions may need reconsideration. And on a third set of issues (the data protection framework itself), the field remains open. What is certain is that this is only the first salvo in a long struggle, and the progress of this case through the Kenyan courts will be fascinating to watch.


(Disclaimer: The author provided research assistance to some of the petitioners in this case).

Notes from a Foreign Field: The Dutch Court on Privacy, Surveillance, and Welfare Fraud

In an interesting judgment (in Dutch; use Google Translate) delivered by the Hague District Court earlier this week, the Dutch government’s Risk Indication System [“SyRI”] was found to violate the European Convention on Human Rights (read summary). SyRI was an algorithmic system designed to “prevent and combat fraud in the field of social security and income-related schemes, tax and social insurance contributions and labor laws.” It involved taking data held by various government departments and linking it in order to generate a “risk report”: the collected data would be processed against certain “risk indicators”, and according to a “risk model”, in order to produce the “risk report”. The “risk report”, in turn, would flag a person as potentially involved in fraud (in relation to access to welfare benefits, tax payment, etc.), and as a possible subject of further investigation. The data that could be processed included information about work, taxes, property ownership, trade, demographic details, and multiple other categories. The process, in outline, was as follows:

Collection of Data from Different Government Departments → Encryption and Pseudonymising of Data → Processing of Data against Risk Indicators and through the Risk Model (First Phase) → Decryption and De-Anonymising of Data that is Flagged as an “Increased Risk” → Further Analysis → Preparation of Risk Report → Potential Further Action on the basis of the Risk Report.
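Purely by way of illustration, here is a minimal Python sketch of what a pipeline of this kind might look like. It is emphatically not the actual SyRI system: the real indicators and risk model were never disclosed (which, as we shall see, is the very transparency problem the Court identified), so every field name, weight, and threshold below is a hypothetical of my own.

```python
# Illustrative sketch only: the real SyRI risk model and indicators were never
# made public. All attribute names, weights, and thresholds here are hypothetical.
import hashlib
from dataclasses import dataclass


@dataclass
class Record:
    citizen_id: str   # identifier as held by a government department
    attributes: dict  # linked data: e.g. work, tax, property, benefits fields


def pseudonymise(record: Record, salt: str = "secret-salt") -> Record:
    """Replace the identifier with a keyed hash before first-phase processing."""
    token = hashlib.sha256((salt + record.citizen_id).encode()).hexdigest()
    return Record(citizen_id=token, attributes=record.attributes)


# Hypothetical "risk model": a weighted sum over opaque indicators.
RISK_WEIGHTS = {"benefit_claims": 0.4, "undeclared_income": 0.5, "address_mismatch": 0.1}
RISK_THRESHOLD = 0.6


def risk_score(record: Record) -> float:
    """Score a record against the (hypothetical) risk indicators."""
    return sum(w * float(record.attributes.get(k, 0)) for k, w in RISK_WEIGHTS.items())


def first_phase(records: list[Record]) -> list[Record]:
    """Process pseudonymised, linked data and flag 'increased risk' cases."""
    pseudonymised = [pseudonymise(r) for r in records]
    return [r for r in pseudonymised if risk_score(r) >= RISK_THRESHOLD]


# Only flagged records would then be de-pseudonymised, analysed further, and
# written into a "risk report" for possible investigation.
if __name__ == "__main__":
    linked = [
        Record("AB123", {"benefit_claims": 1, "undeclared_income": 0}),
        Record("CD456", {"benefit_claims": 1, "undeclared_income": 1}),
    ]
    print(f"{len(first_phase(linked))} record(s) flagged for the second phase")
```

The point of the sketch is simply to show where the legal problems bite: the weights and threshold are exactly the elements that the Dutch State refused to disclose, and it is their secrecy that made the system impossible to test against the data protection principles discussed below.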

The enabling legislation that authorised SyRI was challenged by a group of petitioners and privacy organisations, on the now-familiar grounds of privacy and discrimination. The State defended it on the – also familiar – ground that it was of overriding importance to identify fraudulent claimants, so that the welfare system could continue functioning.

The Framework

As we have seen on multiple occasions by now, the outcome of such cases – involving dense factual disputes – often hinges upon the initial analytical framework set out by the Court. In this case, the Court began by setting out two important principles. First, it noted that the “development of new technologies also means that the right to the protection of personal data is increasingly important.” In an implicit rebuke to the “move fast and break things” school of technological utopianism, the Court emphasised that the greater the scope for technology-enabled intrusion into private life, the greater the importance that must be given to issues of privacy and data protection. Secondly, the Court set out in advance that whatever the stated benefits of SyRI in combating welfare (and other kinds of) fraud, its deployment would have to comply with the data protection standards of transparency, purpose limitation, and data minimisation (see paras 6.31 – 6.33) in a “clear and verifiable way.” This put the burden firmly upon the State to establish compliance with these fundamental principles, ensuring in turn – as we shall see – that unresolved factual disputes would result in a verdict in favour of the citizen and against the State, rather than the other way round.

The Analysis 

The Court began by noting that:

…the State has not made the risk model and the indicators that make up or may consist of the risk model public. In these proceedings, too, [it] did not provide objectively verifiable information to the court in order to enable it to test the views of the State on what SyRI is. The reason given by the State for this is that citizens could adjust their behavior accordingly. This is a conscious choice by the State. (paragraph 6.49)

In this context, the Court noted that the SyRI process involved the use of large data sets (from different sources), their inter-linking, and the potential use of data mining and predictive analysis. The linking of data made it a clear case of profiling (indeed, both parties were in agreement on this). The Court therefore held that while no civil or legal consequence immediately flowed from the preparation of a risk report, it did nonetheless “have a significant effect on the private life of the person on who the report relates to.” Article 8 of the ECHR (the right to privacy) was therefore attracted.

In sum, therefore, SyRI involved profiling individuals on bases that were “secret”, in a manner that impacted their right to privacy, and whose results were not communicated to them. The question then was whether this interference with the right to privacy could be justified as being “necessary in a democratic society.”

During the course of this – essentially – proportionality analysis, the Court accepted the government’s contentions that tackling welfare fraud was both a legitimate State purpose, and a pressing social need. However, it went on to find that SyRI violated all three prongs – transparency, purpose limitation, and data minimisation – of the data protection principles. On the first prong, the Court observed that the “legislation in no way provides information about the factual data that can justify the presence of a particular circumstance, or which objective factual data can justifiably lead to the conclusion that there is an increased risk.” In other words, both the indicators and the risk model – as observed above – were secret. Thus, the Court held:

…it is impossible to check how the simple decision tree that the State is talking about comes about and what steps it consists of. It is thus difficult to see how a person concerned can defend himself against the fact that a risk report has been made with regard to him or her. It is equally difficult to see how a data subject whose data have been processed in SyRI, but who have not led to a risk report, can be aware that his or her data has been processed on the right grounds. (paragraph 6.92)

This, the Court found, was a clear breach of the principle of transparency. The Court did not – as certain other Courts might have done – ask for the indicators and the risk model in a “sealed cover”, so that it could consider their validity for itself. Rather, it straightforwardly held that consequential State decisions involving a violation of the right to privacy could not be made in a non-transparent fashion.

The Court also went on to note that transparency was crucial because of “the risk that (unintended) discriminatory effects will occur.” In fact, relying upon a report submitted by the UN Special Rapporteur on Extreme Poverty, the Court noted that “areas that are already known as problem neighborhoods are being investigated further. As a result, the chance that irregularities are found is higher than in other neighborhoods, which in turn confirms the image of a problem neighborhood, encourages stereotyping and a negative image of the residents who live in the neighborhood is reinforced, even though there is no question of them. risk reporting” (paragraph 6.92). This, of course, is a standard problem with all such algorithmic procedures: recall that it has repeatedly been found, for example, that the use of DNA databanks in crime-fighting has a discriminatory effect, as the composition of the databank is already likely to overrepresent marginalised populations (the best-known example being that of black people in the USA) – thus leading to a greater chance of false positives, false negatives, and wrongful convictions of members of those populations.

Crucially, in light of this, the Court found that “it is not possible to assess whether this risk [i.e., of discrimination] has been adequately addressed, in the absence of verifiable insight into the risk indicators and the (operation of) the risk model, including the method of analysis by the Inspectorate SZW” (paragraph 6.94). This is a crucial point, and it takes us back to the issue of framing. Here – as in other similar cases, such as the litigation around Aadhaar in India and the Huduma Namba in Kenya – the Court had to deal with an information gap on a crucial issue (in this case, the non-disclosure of the indicators and the risk model). Now, there are two ways a Court can respond to this. The first is to say that as these are issues of technological design, they are not fit for judicial review, and that therefore, in the absence of adequate information, they will presumptively be decided in favour of the State, in accordance with the principle of judicial deference. The second way – and this is how the Dutch Court responded – is to say that given that crucial rights are involved, and given the well-known potential for exclusion and discrimination that comes with such algorithmic systems, the onus lies upon the State to affirmatively demonstrate that there is no violation of rights. This is the role played by the data protection principles of “transparency and verifiability”, and this is how the Court was able to conclude that:

without insight into the risk indicators and the risk model, or at least without further legal safeguards that compensate for this lack of insight, provides insufficient guidance for the conclusion that with the use of SyRI the interference in private life in the light of the abuse and the fraud that is intended to combat is always proportional and therefore necessary. (paragraph 6.95)

The Court then went on to hold that the principles of data minimisation and purpose limitation had also not been complied with. This was because there was no provision for a “pre-test” to determine “whether the interference in private life due to all files linked to that project is necessary, proportionate and subsidiary in view of the specific objective of that project.” (paragraph 6.99)

Conclusion

Questions involving the interface of technology and fundamental rights will – as discussed above – necessarily take Courts into technical domains. In such situations, the easy thing for the State to do is to exploit information asymmetry, withhold key factual details, and claim that the Court is neither competent nor authorised to go into questions of technological design. Giving in to these arguments by invoking judicial deference is an easy answer for the Court as well.

The easy route, however, is not the correct one. This is because when you are dealing with technology and fundamental rights, issues of design are crucial: the manner in which a technological system will affect fundamental rights depends upon how it is designed (is the data collection centralised? Federated? Can information silos be linked? etc.). Adjudicating upon issues of design is therefore unavoidable: even when a Court refuses to engage with those questions, its very non-engagement is an active decision – one that grants the State a kind of impunity to violate fundamental rights, by leaving it unconstrained in the design of the systems that it deploys.

In that context, the judgment of the Hague Court is crucial, because it squarely takes the bull by the horns, and categorically holds that the absence of information is a reason to hold against the State. This, as we have seen, is possible only with a certain kind of framing: a framing that neither takes technological utopianism at face value, nor treats the role of technology in the State’s pursuit of “social goals” such as fraud prevention as an unmitigated good. Rather, it is a framing that takes seriously the potential of technology to violate rights – both privacy and equality – and insists that it is the role of the State to ensure that those concerns are adequately addressed before it can proceed with its project of harnessing technology in the service of its goals. This, I would submit, is far truer to the judicial role of safeguarding fundamental rights.

 

The Bombay High Court on Illegal Surveillance

In a judgment handed down earlier today, a two-judge bench of the Bombay High Court found that the Central Bureau of Investigation’s [“CBI”] surveillance of a businessman accused of paying bribes was illegal. The Court quashed the surveillance orders, and directed the destruction of the call records. The judgment is important for three reasons: (i) it is a model of principled constitutional adjudication, following both the letter of the law and applicable precedent; (ii) it extends the logic of the Supreme Court’s privacy judgment [“Puttaswamy”] to the context of unconstitutionally obtained evidence; and (iii) it provides a genuine and enforceable remedy upon a finding that fundamental rights have been breached.

Principled Constitutional Adjudication

The facts of the case were simple. There were three surveillance orders (of 29th October 2009, 18th December 2009, and 24th February 2010), which constituted the legal basis for the interception of the petitioner’s phone calls. The petitioner argued that these orders were ultra vires the Telegraph Act, as well as contrary to the Puttaswamy judgment. As the call records had been used in the charge-sheet against the petitioner, it was also argued that those records should be destroyed, and could not form part of the case against him.

The Bombay High Court (Jamdar and More JJ) went straight to the text of the Telegraph Act. Justice More noted that Section 5(2) of the Act made it clear that telephone interception was permissible only in one of two circumstances: the existence of a public emergency, or a threat to public safety. This was obviously not a case of public emergency, so the key phrase was “public safety.” More J. then referred to the judgment of the Supreme Court in PUCL v Union of India (1997), where “public safety” had been clearly defined as “the state or condition of freedom from danger or risk for the people at large… [that would be] apparent to a reasonable person.” He then noted that the PUCL judgment had also set out procedural safeguards in cases of surveillance, which were to be treated as enforceable rules under Section 5 of the Telegraph Act.

More J. went on to observe that the judgment in PUCL had been endorsed by the nine-judge bench of the Supreme Court in Puttaswamy. This also meant that the contrary precedent in R.M. Malkani v State of Maharashtra – which had held that telephone interception did not violate any fundamental rights – now stood overruled. Puttaswamy had also held that any restriction upon the right to privacy had to satisfy the proportionality standard. More J. then held that the directions in PUCL conformed to the proportionality standard, and were evidently binding upon the Bombay High Court.

Applying the law to the facts, More J. found that it was abundantly clear that there was no threat to “public safety”, following PUCL’s definition. Therefore, “it was impermissible to take resort to telephone tapping.” (para 16) He noted that “even at this stage, from the affidavits filed by the Respondents or the charge-sheet, the Respondents could not justify any ingredients of risk to the people at large or interest of the public safety.” (para 17) Consequently:

We are satisfied that in peculiar fact of the instant case, the impugned three interception orders neither have sanction of law nor issued for legitimate aim, as sought to be suggested. The impugned three interception orders could not satisfy the test of “Principles of proportionality and legitimacy” as laid down by the nine judges’ constitution bench decision in K. T. Puttaswamy (supra). We, therefore, have no hesitation in holding that all three impugned orders are liable to be set aside. Accordingly, we quash and set aside the same. (para 19)

Now, this may look like a straightforward application of law – and it is – but there are two points that I want to highlight here. The first is the unambiguous understanding of the legal standard laid out in Puttaswamy, and its application. Puttaswamy makes clear that in cases where fundamental rights are infringed, it is not enough for the government to cite law and order, and for the Court to engage in an undefined balancing exercise (that somehow ends up being resolved in favour of the government). Rather, Puttaswamy’s proportionality standard is a four-step test that requires, inter alia, the government to show that there is a legitimate aim, that the infringing action is rationally connected to that aim, that it is necessary (i.e., the least restrictive option to achieve that aim), and that it is proportionate (i.e., the benefits outweigh the costs of infringing rights). The Bombay High Court makes it clear that the government cannot avoid justifying its actions under that standard.

Secondly, in recent times we have seen far too often that courts have not only given overriding importance to the government’s stated goals (“public interest”, “national interest”, “national security”, etc.), but have also let the government define the content of these phrases. This has been most prominently visible in the Kashmir cases (discussed on this blog), where the invocation of “national security” has acted as an impenetrable shield against any further judicial scrutiny. But it has also been visible elsewhere – in the courts’ recent jurisprudence on anticipatory bail, for example, or in their jurisprudence on bans upon organisations, bail under the UAPA, the concept of “custodial interrogation”, and of course the Supreme Court’s recent use of Article 142 to provide legal cover to the compulsory taking of voice samples.

In short, courts have increasingly become fixated on the governmental imperatives of law and order, treating procedural rights and safeguards as nuisances that need to be swept aside in order to ensure that “public interest” is satisfied. In that context, the Bombay High Court’s decision to straightforwardly apply the law as well as constitutional precedent, leading to the inescapable conclusion that surveillance was illegal, is most refreshing. Notice that the Court could have decided the other way as well: it could have held, for instance, that corruption is so grave a problem that adequately prosecuting it is indeed in the “interests of public safety.” This, of course, would have twisted the meaning of “public safety” out of all recognition – but it is not a reasoning technique that we are unfamiliar with (recall, for example, the Supreme Court’s interpretation that “migration” fell within the meaning of “external aggression”, in Sarbananda Sonowal’s Case).

Instead, the Bombay High Court’s judgment is a reminder that the “balance” between governmental goals and individual rights has already been struck in the text of statutes (providing procedural safeguards) and in constitutional decisions that lay out doctrines like proportionality. The job of a court now is to straightforwardly apply text and precedent, and stand by the legal conclusion that follows from that application.

The Remedy

Equally important is the Court’s remedy. The Bombay High Court noted that in PUCL, it had been made clear that if the Review Committee found that telephone interception was illegal, copies of the intercepted material would have to be destroyed. This had also been adopted by Rule 419(A)(17) of the Telegraph Rules. Consequently, More J. noted:

… having held that the impugned interception orders have been issued in contravention of the provisions of section 5(2) of the Act, we have no option but to further direct the destruction of intercepted messages. (paragraph 21)

The Court also pointed out that the CBI itself had been taking conflicting stands in court about the legal foundation of the orders, and the actions of the Review Committee – a practice that the Court strongly deprecated (paragraph 28). More J. found, in addition, that the successive interception orders were carbon copies of each other, and had evidently been passed without application of mind – a clear breach of the statute.

Unconstitutionally Obtained Evidence

The Respondents argued, however, that even if there had been illegality in the collection of the evidence, they should be entitled to use it in the course of the criminal prosecution. They relied upon several judgments that had held that as long as evidence was relevant, it could be introduced in a trial, regardless of the legality of how it was obtained.

The Court’s response to this argument is perhaps the most remarkable part of the judgment. After distinguishing the relevant precedent (including the judgment in Pooran Mal), More J. held:

We may also add here that if the directions of the Apex Court in PUCL’case (supra) which are now re-enforced and approved by the Apex Court in K. T. Puttaswamy (supra) as also the mandatory rules in regard to the illegally intercepted messages pursuant to an order having no sanction of law, are permitted to be flouted, we may be breeding contempt for law, that too in matters involving infraction of fundamental right of privacy under Article 21 the Constitution of India. To declare that dehorse the fundamental rights, in the administration of criminal law, the ends would justify the means would amount to declaring the Government authorities may violate any directions of the Supreme Court or mandatory statutory rules in order to secure evidence against the citizens. It would lead to manifest arbitrariness and would promote the scant regard to the procedure and fundamental rights of the citizens, and law laid down by the Apex Court. (paragraph 39)

This is a hugely important holding. I have argued elsewhere that in Selvi v State of Karnataka, a three-judge bench of the Supreme Court had drawn a crucial distinction between illegally obtained evidence (admissible if relevant) and unconstitutionally obtained evidence (inadmissible under all circumstances). Evidence obtained in breach of fundamental rights, in other words, could not then be used in court against the citizen. Although the Bombay High Court did not cite Selvi, it did cite Puttaswamy (which endorsed Selvi), and more importantly, it provided a strong rationale for this: that to hold on the one hand that a certain method of collecting evidence was unconstitutional, while also allowing the State to use the evidence so collected, would not only be arbitrary, but would also ensure that fundamental rights and individual safeguards remained parchment barriers against the State.

Conclusion

In each of its three conclusions – on illegality, on the remedy, and on evidence – the Bombay High Court, as I have shown, engaged in a textbook application of law and precedent. It faithfully applied existing standards balancing individual procedural safeguards against the imperatives of law and order, found that under those standards surveillance was clearly illegal, and then went through with the consequences that followed. What makes it remarkable, of course, is the number of recent judgments that appear to have given a go-by to these standards by invoking the imperatives of the State.

It is also remarkable, I think, for another reason: the Puttaswamy judgment was hailed as a new dawn for constitutionalism (Constitutionalism 3.0, as someone commented) precisely because of its categorical assertion that the individual was at the heart of the constitutional order, that infringements of individual rights must be limited to the strict confines of what was constitutionally permitted, and must be justified as such by the State. Puttaswamy was not just a simple finding that privacy was a fundamental right under the Constitution, but the reasoning of the nine judges brought hope that it would lay the foundation for transforming how civil rights adjudication happens: adjudication that would stop treating – as K.G. Kannabiran once said – the “restrictions” as fundamental and the rights as contingent, and that would stop providing automatic normative priority to the reasons of State. And at the time of Puttaswamy, I had written that:

But we must all be equally clear about the fact that the real task will begin now: it will begin with the first bench that is asked to apply Puttaswamy to a concrete case where privacy runs up against reasons of State, and it will continue in the months, years, and decades to come. The task is not simply to apply Puttaswamy, but to use Puttaswamy to craft a genuinely progressive civil rights jurisprudence, where the original constitutional compact – that individual rights are not subordinate to “public good”, “social good”, “public interest” (or any other variant of the phrase) – is restored. And that, now, is the responsibility of citizens, of lawyers, and of course, of the judges who will be called upon to adjudicate privacy and liberty claims in the wake of this judgment. For judges, indeed, it is a challenge: to be true to the animating spirit of Puttaswamy, and make the hard decision to tell the State that although its aim may be laudable, its motives unimpeachable, and its method beneficial, under the Constitution of India, it nonetheless cannot have what it wants.

I think it is fair to say that the last few months have severely tested this cautious optimism, expressed more than two years ago. From that perspective, the Bombay High Court judgment was beautiful to read, because it indicated the difference that Puttaswamy can and could make, and it provided a glimpse of Puttaswamy’s promise to initiate a transformative civil rights jurisprudence, a promise that had so enchanted us at the time.

The enchantment may have worn off, but the hope remains!

 

Notes from a Foreign Field: The South African High Court on Surveillance and Privacy

In a landmark judgement delivered yesterday (Amabhungane v Minister of Justice), the North Gauteng High Court in South Africa ruled large parts of the South African surveillance law unconstitutional. With this progressive ruling, South Africa joins the European Court of Human Rights in requiring surveillance regimes to conform to strict standards of proportionality and of procedural and substantive safeguards, designed to protect the right to privacy.

The RICA

At issue before the High Court was the Regulation of Interception of Communications Act [“RICA”], which set out the legal framework facilitating and regulating State surveillance in South Africa. It required surveillance requests to be authorised by a retired judge (“the designated judge”), who, in turn, had to submit an annual report to Parliament. Surveillance requests had to be highly detailed and specific: setting out the identity of the target, the grounds for surveillance (which were limited to serious offences and threats to national security), and the gravity of the suspected offence, and disclosing (among other things) whether less intrusive methods had been tried and found not to be feasible, the proposed period of surveillance, and so on. These factors had to be taken into account by the designated judge before deciding whether or not to grant a surveillance request. Requests could cover real-time interception, as well as access to past communications stored by telecom companies.

It will be immediately noted that the RICA was already cognisant of privacy, to an extent greater than we ordinarily see in legislation (or even judicial opinions). For example, the requirement that surveillance requests must be approved by a judge is already two steps ahead of the Indian Supreme Court’s judgement in PUCL v Union of India, where an identical argument was rejected, and bureaucratic control of surveillance decisions was deemed to constitute an adequate safeguard of constitutional rights. Similarly, the highly specific justifications for surveillance went some way towards fulfilling the constitutional requirement of necessity and proportionality, and having that requirement adjudicated by a judicial mind. Therefore, the constitutional challenge in South Africa was already beginning from a (relatively) high baseline of privacy protection.

In this context, the applicants made further challenges to the statute, arguing that even in its present form, it was insufficiently protective of privacy (and other constitutional rights). Before examining the challenges, it is important also to note the analytical framework that the Court brought to this case. The Court began by noting that “prohibition of interception is the point of departure, reflecting the privacy norm embodied in Section 14 of the Constitution … [but sometimes] exceptions to the prohibition are created for law enforcement officers and security officials.” (paragraph 29) In its analysis of RICA, Sutherland J. then went on to note that:

What does a qualitative assessment of RICA yield? The value of privacy is privileged and expression is given to the idea that where exceptions to respect for privacy are to be allowed, a high threshold of justification is stipulated. Self-evidently, to trespass into the private realm is permissible only to the extent that a superior claim to do so can be made out on grounds of necessity. This implies that other means to achieve the aims of the interception would have been ineffective, and the gravity of the circumstances outweighs the primary value of privacy. The safeguards model recognises the need for an independent authority to approve interceptions. This model, in which the person desiring the interception is distinct from the person authorising it, is designed to prevent, as far as possible, abuse of the system. Self-evidently, the approving authority’s efficacy in achieving this aim is dependent on the information made available to that authority, ie the designated judge. (paragraph 35)

This analytical framework is important, because it forms the backdrop of the analysis that would follow. The idea that privacy is the default and interception the exception is crucial, because it is this framing that puts the burden of justification upon the government. It is the government that has to show that other means would be ineffective (the requirement of necessity), that the interests are overriding (proportionality), and that the procedural safeguards (independent checking authority) are in place. And this framing also ensures that where there are interpretive gaps or ambiguities, those ambiguities (as we shall see) will be resolved in favour of the individual, and against the State. These may all sound like very basic principles, but in an age where (as K.G. Kannabiran reminded us) the restrictions – instead of the rights – have become “fundamental”, their restatement is always important and valuable.

Notice

The Applicants’ first challenge was that the RICA did not provide for notice to the target of surveillance, even after the surveillance had ceased. The requirement of notice, the Applicants argued, was essential to securing access to justice and access to the courts (a guaranteed right under the South African Constitution). This was the only way to ensure accountability (because, obviously, notice before surveillance would be self-defeating), to check the abuse of surveillance, and to provide targets with an adequate remedy in case it turned out that the surveillance was abusive.

Drawing from the legal position in the United States, Canada, Japan, and the ECHR, Sutherland J. agreed, holding that “the norm is that unless reasons exist not to give notice, notice shall be given” (paragraph 47). Now, there might exist a small subset of cases (such as treason or espionage) where perpetual secrecy was necessary – but that could not be the norm. Consequently, Sutherland J. adopted a remedy that is commonly used in South Africa: he declared the existing provisions unconstitutional, suspended the declaration of unconstitutionality for two years to give Parliament time to rectify the defects, and, in the meantime, read a provision into the Act requiring that the target of surveillance be notified within ninety days of the cessation of surveillance (paragraph 53). Sutherland J. also added that in exceptional cases, the designated judge could extend the period by 180 days at a time – but if secrecy was to extend beyond three years, a three-member judicial panel would be required to rule on any further extension.

Independence and Due Process

It should be obvious from the above that a lot would turn upon the designated judge – her character and her independence. Under the RICA as it stood, the designated judge was appointed by the government. The Applicants argued that this compromised independence, a position that – as Sutherland J. pointed out – was not “met with any serious rebuttal” (paragraph 63). The Applicants proposed that the designated judge be appointed by the Judicial Service Commission, which is also responsible for appointments to the higher judiciary. Sutherland J. held, however, that the specifics of the appointment process required greater deliberation; therefore, as an interim measure, he directed that the appointment would be made on the nomination of the Chief Justice (paragraph 71).

Equally importantly, the Applicants pointed to the lack of an adversarial process during the grant of surveillance requests; in other words, when a surveillance request was made, the only parties present would be the government and the designated judge. This, it was argued, violated the fundamental rule of audi alteram partem, or the right to be heard. The Applicants therefore proposed a familiar remedy – that a “public advocate” be appointed, who would argue such requests on behalf of the surveillance target (because, naturally, the target themselves could not be informed – see, for example, the proposals in the draft Indian Privacy Code). The government raised a familiar objection – that the designated judge, applying her judicial mind, should be trusted to sift the material impartially and objectively. In response, Sutherland J. noted – drawing from the analytical framework discussed above – that fundamental values are “to be reluctantly trampled on with as light a tread as possible” (paragraph 74). While he did not read in a specific requirement of a public advocate, noting that it was only one of many possible options to secure a fair hearing and an adversarial process, he nonetheless struck down the relevant provisions of the RICA because of their failure to deal with the problem of ex parte orders, and – once again – suspended the declaration of invalidity to allow Parliament to deal with the problem, on the basis of the principles indicated in the judgment.

Archiving and Processing

The Applicants’ third objection was that the RICA provided no safeguards with respect to how the collected data was archived and processed – including with respect to the telecommunication companies, which were required by law to store the data for up to three years. While Sutherland J. declined to find that the period of storage was too long – noting that there could be no bright-line determination in this regard (paragraph 95) – he did hold that the procedural safeguards with respect to storage and access had to be set out transparently in the statute itself, and not by means of secondary legislation (rules and directions) (paragraph 101). Furthermore, he found that there were no provisions dealing with erasure or management of surplus data, or with respect to oversight with regard to the stored data (paragraph 106). He therefore issued a suspended declaration of invalidity with regard to the relevant provisions of the RICA.

Special Exemptions for Lawyers and Journalists

An interesting challenge made by the Applicants was that lawyers and journalists – by their very nature – required heightened protection from surveillance; lawyers because of attorney-client privilege, and journalists because of the overriding imperative of protecting anonymous sources. Sutherland J. agreed with the principle, noting in particular the importance of investigative journalism to any free society (the South African Constitution also guarantees specific protection to the press); exceptions were limited to serious cases such as espionage or violations of human rights (paragraph 133). Accordingly, he read into the Act a positive requirement that if the target of surveillance was a lawyer or a journalist, that fact would have to be revealed to the designated judge, who would take it into account while deciding whether or not to grant the surveillance request.

Sutherland J.’s analysis is particularly interesting when we juxtapose it with the Indian Supreme Court’s repeated insistence that the press is on par with every other citizen, and cannot claim any specific privileges under Article 19(1)(a). While the Indian Constitution does not provide explicit protection to the press (unlike the South African Constitution), the argument is a red herring: it is nobody’s case that journalists have a special status or additional rights; the argument is that the practice of journalism – an integral element of the freedom of speech and expression, in its avatar of protecting the free flow of information in a democratic society – requires a certain kind of protection if it is to be meaningful at all; and that protection flows from Article 19(1)(a). It is this analysis that shines through clearly in Sutherland J.’s judgment.

The Illegality of Bulk Surveillance

The Applicants’ final argument was that the RICA did not authorise bulk or mass surveillance (i.e., surveillance without probable cause, where dragnets are placed that capture, for instance, all communications using certain keywords, that are later mined for information). In response, the government relied upon another statute – the National Strategic Intelligence Act, which allowed for the gathering, correlating and analysing of intelligence. It was here, once again, that the analytical framework we discussed at the beginning of this post became relevant; because in the absence of specific authorisation of bulk surveillance, Sutherland J. found that it could not be implied into either the NSIA or the RICA (paragraphs 151 & 163). As he noted, “Our Law demands such clarity, especially when the claimed power is so demonstrably at odds with the Constitutional norm that guarantees privacy.” (paragraph 163) He did not, however, pronounce on the constitutional validity of bulk or mass surveillance per se.

Conclusion

The minutiae and details of Sutherland J.’s judgment are, of course, fascinating to privacy scholars and lawyers. There are, however, a few broader points to take home. The first is his clear insistence that privacy is the default and surveillance the exception – followed through by actually applying that framework in adjudicating the constitutionality of specific surveillance provisions. And flowing from this is a particularly important point: Sutherland J. did not write an activist judgment, with free-wheeling claims about the unconstitutionality of surveillance in general, or prohibiting all forms of surveillance; in this, he was appropriately deferential to the law-making authorities. Deference, however, does not mean subservience, or writing a blank cheque to the government under the guise of national security or public safety. Rather, Sutherland J. required the government to explain and justify the need for each impugned provision of the RICA; and where the government was not able to provide a satisfactory explanation (and indeed, as is evident above, many of these issues involved little more than pure common sense), the provision was struck down or read down. Deference, therefore, did not become a garb under which Sutherland J. hid from his duty to examine surveillance provisions under constitutional standards of necessity and proportionality, and to ensure that even the surveillance authorities were bound by the rule of law. This is particularly important for us, at a time when India’s surveillance infrastructure is opaque, its legislative foundations unclear, the constitutional status of bulk surveillance unsettled, constitutional challenges pending, and the Court increasingly inclined to use “national security” as a mantra to flee the field of adjudication. Sutherland J.’s judgment is a timely reminder of how courts ought to work in holding the executive to account under a constitutional framework.

The Aadhaar Judgment and Reality – III: On Surveillance (Guest Post)

(This is the third and penultimate essay in Anand Venkat’s four-part series examining the factual foundations of the Aadhaar judgment.)

In our attempt to further decode the factual errors in the Aadhaar judgment, it is worth asking an important question: why is genuine engagement with contradictory facts so hard? Charlie Munger wrote about 24 causes of misjudgment in 1995, and, not surprisingly, simple psychological denial comes up as number 2 on that list.

In this post, we will argue that simple denial alone can’t explain the Majority’s inability to deal with aspects of the petitioners’ challenge. Technological illiteracy is a factor as well.

Surveillance

What is surveillance? If a policewoman follows a person at all times, that fits the definition of surveillance, because she comes to know everything about the person. Now what if, instead of a policewoman, a recording device is always present? That too fits the definition. Surveillance, then, is not simply someone knowing something about a person, but someone having the capability to know everything about her, and actively using that capability.

How is it possible for someone to have the capability to know everything about a person? The answer is “Body Tagging”. If every activity that the person ever undertakes in her life can be reliably attached to her body, a detailed profile can be built of the person, which enables surveillance. Put simply, profiling is surveillance.

Body tagging a person’s life across multiple activities can easily be done if the “body” is given a unique number, and the unique number is attached to every activity. Thus, if a technological means called “Aadhaar” can produce perfect “unique numbers” that are attached to a body – as the Majority judgment states in paragraph 55, “when it comes to obtaining Aadhaar card, there is no possibility of obtaining duplicate card” – then mass surveillance is a logical corollary, if that number is attached to other databases.
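To make the point concrete, here is a minimal, purely illustrative sketch (with invented database names, fields and numbers – none of these are drawn from any actual system) of what body tagging across silos enables: once the same unique number is seeded into otherwise unconnected databases, anyone with access to them can reconstruct a consolidated profile with a trivial join.

```python
# Hypothetical illustration only: the silos, field names and the unique number
# below are invented. The point is structural: a shared unique identifier lets
# separate databases be merged into a single profile with a trivial lookup.

health_db  = {"1234-5678-9012": {"medicines": ["insulin"]}}
ration_db  = {"1234-5678-9012": {"monthly_ration": "rice, wheat"}}
telecom_db = {"1234-5678-9012": {"sim_numbers": ["98xxxxxx01"]}}
bank_db    = {"1234-5678-9012": {"accounts": ["XXXX0123"]}}

def build_profile(unique_id, *silos):
    """Merge every record tagged with the same unique number into one profile."""
    profile = {}
    for silo in silos:
        profile.update(silo.get(unique_id, {}))
    return profile

# One call across the silos yields the kind of consolidated "golden record"
# described below.
print(build_profile("1234-5678-9012", health_db, ration_db, telecom_db, bank_db))
```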

Surveillance: The absence of factual engagement

The Majority, however, chooses not to engage with the petitioners’ submissions about how the body tagging of persons, across multiple databases, is surveillance. It was brought to the court’s notice that many states have built “State Resident Data Hubs” [“SRDHs”], which have body tagged the sensitive personal details of their residents, available in multiple silos, and have merged them into one “golden” record. For instance, the state of Andhra Pradesh has gone further than most and built Star Trek-style dashboards that display the intimate personal details of 43 million of the state’s 50 million residents: the GPS coordinates of their homes, the medicines they use, the food rations they eat, what they say about the Chief Minister on their social media accounts, real-time feeds of thousands of security cameras (with some cameras inside people’s homes – voluntarily, of course), their castes and sub-castes, their religion, their student scholarships and old-age pensions, their movement in every state ambulance, and of course – their Aadhaar numbers.

The Majority avoids engaging with the argument because, if it did, it would arrive at the same conclusion that Chandrachud J. arrives at in his dissenting opinion: that the technological design of the project subverts the Aadhaar Act itself – enabling profiling through surveillance – and hence cannot stand:

When Aadhaar is seeded into every database, it becomes a bridge across discreet data silos, which allows anyone with access to this information to reconstruct a profile of an individual’s life. It must be noted while Section 2(k) of the Aadhaar Act excludes storage of individual information related to race, religion, caste, tribe, ethnicity, language, income or medical history into CIDR, the mandatory linking of Aadhaar with various schemes allows the same result in effect. For instance, when an individual from a particular caste engaged in manual scavenging is rescued and in order to take benefit of rehabilitation schemes, she/he has to link the Aadhaar number with the scheme, the effect is that a profile as that of a person engaged in manual scavenging is created in the scheme database. The stigma of being a manual scavenger gets permanently fixed to her/his identity. What the Aadhaar Act seeks to exclude specifically is done in effect by the mandatory linking of Aadhaar numbers with different databases, under cover of the delivery of benefits and services. (Chandrachud J., dissenting, paragraph 274)

Surveillance: Internal contradictions

Instead of focussing on body tagging across various databases, the Majority focuses only on the surveillance potential of the “metadata” stored in the CIDR. This leads to logically contradictory observations.

For instance, if Facebook and Google can know the places where one has shopped, and also the movies that one has watched, they already have “data”. But the Majority then goes on to make the bizarre claim, in paragraph 160, that this “data” can turn into “metadata”: “All this data is there with the companies in respect of its users which may even turn into metadata.”

In the very same paragraph, it makes the further astonishing claim that OTPs are sensitive personal information in the same manner as biometrics:

Every transaction on a digital platform is linked with some form of sensitive personal information. It can be an individual’s user name, password, account number, PAN number, biometric details, e-mail ID, debit/credit card number, CVV number and transaction OTP etc.
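The conflation in these two claims is easier to see with a concrete, purely illustrative example of the conventional distinction: the content of a communication is “data”, while the record about the communication is “metadata” – one does not “turn into” the other, since metadata is generated alongside the data. The sketch below, using invented values, also illustrates why an OTP is not sensitive in the same way as a biometric: an OTP expires and can be replaced, while a biometric cannot.

```python
# Hypothetical values, for illustration only.

call_content = "Hello, can we meet at 6 pm near the clinic?"   # the data itself

call_metadata = {        # metadata: facts *about* the call, not what was said
    "caller": "+91-98xxxxxx01",
    "callee": "+91-98xxxxxx02",
    "start_time": "2018-09-26T18:00:00+05:30",
    "duration_seconds": 142,
    "cell_tower_id": "TWR-0451",
}

# An OTP is ephemeral and replaceable; a biometric, once compromised, is not.
one_time_password = {"value": "482913", "valid_for_seconds": 300}
fingerprint_template = {"value": "<biometric template>", "replaceable": False}
```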

The Majority’s inability to understand technology also becomes clear from its discussion on “Authentication log retention”, in paragraph 260.

We do not find any reason for archiving the authentication transaction data for a period of five years. Retention of this data for a period of six months is more than sufficient after which it needs to be deleted except when such authentication transaction data are required to be maintained by a Court or in connection with any pending dispute. Regulations 26 and 27 shall, therefore, be amended accordingly. (paragraph 260)

Let us deconstruct the ruling here carefully. The judgment says that:

  1. Authentication transaction data needs to be deleted after six months.
  2. But not if the data is required by a court, or in connection with any pending dispute.

By doing so, it restricts the time frame in which a dispute could arise to only six months. This has ramifications for biometric fraud disputes such as the Gujarat biometric data trade – where the biometrics of legislators were sold en masse for 7 lakh rupees – and the Airtel LPG routing scam, as these scams went on for months before detection. But the Majority, through its ignorance of technology, ensures instead that it would not be possible for law enforcement to investigate such cases, by limiting the retention of this metadata.

And here’s the contradiction: if surveillance is indeed as impossible and far-fetched as the Majority states in paragraph 197 (“… therefore, the threat to real-time surveillance and profiling may be far-fetched...”), why would long-term retention of authentication logs be an issue at all? The reading-down is both inexplicable and logically incoherent.

Addendum: Direct Benefit Transfer

There are three important pillars in Direct Benefit Transfer (DBT).

  1. The Aadhaar number
  2. Mobile
  3. Bank Account

In the earlier rollout of DBT, the various schemes’ databases merely collected the bank account numbers or the post office savings bank account numbers of the beneficiaries. There was simply no need for either Aadhaar numbers or mobile numbers. However, once mandatory biometric authentication was introduced as a pre-condition for DBT, the situation changed drastically.

Biometric authentication is always fallible, and the Majority’s refusal to engage with that simple technological fact does not change the reality. Hence, the only fallback is OTP authentication via the linked mobile phone. However, UIDAI does not verify the phone number during enrolment, which makes OTPs ineffective. The only “technological hack” then available to UIDAI to avoid biometric exclusion is mobile linking.
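A minimal sketch of that fallback logic, under the assumptions stated in this post (the function and field names below are invented for illustration):

```python
# Hypothetical sketch of the authentication flow described above: biometric
# matching can fail for genuine beneficiaries; the only fallback is an OTP
# sent to a mobile number linked to the Aadhaar number; with no linked and
# verified mobile, the beneficiary is simply excluded.

def authenticate(biometric_match_ok, linked_mobile):
    if biometric_match_ok:
        return "authenticated via biometrics"
    if linked_mobile:
        # send_otp(linked_mobile)  # placeholder for the OTP leg of the flow
        return "authenticated via OTP on linked mobile"
    return "excluded: biometric failure and no linked mobile number"

print(authenticate(biometric_match_ok=False, linked_mobile=None))
```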

Ever since the National Payments Corporation of India (NPCI) introduced the Aadhaar mapper, which links bank accounts to Aadhaar numbers, central and state departments no longer collect beneficiaries’ bank account numbers, and instead use the NPCI mapper to make Direct Benefit Transfers. Hence, for DBT via the NPCI to work, seeding Aadhaar numbers into bank accounts was essential.
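A simplified, hypothetical sketch of the resulting payment flow (all identifiers and names below are invented): the department no longer stores a bank account at all, and simply routes the payment through the mapper, which resolves the Aadhaar number to whichever bank last seeded it.

```python
# Hypothetical illustration of DBT routing via an Aadhaar-to-bank mapper.
# If no bank has seeded the number, the transfer simply fails – which is the
# exclusion problem discussed above.

npci_mapper = {
    "1234-5678-9012": {"bank": "Example Bank", "iin": "600000"},  # invented
}

def disburse_benefit(aadhaar_number, amount, scheme):
    """Route a benefit payment via the mapper instead of a stored bank account."""
    mapping = npci_mapper.get(aadhaar_number)
    if mapping is None:
        return {"status": "failed", "reason": "Aadhaar number not seeded to any bank"}
    return {"status": "credited", "scheme": scheme, "amount": amount,
            "bank": mapping["bank"]}

print(disburse_benefit("1234-5678-9012", 2000, "LPG subsidy"))
```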

The Court does not engage with the technological aspects of this ecosystem when it rules that both mobile and bank-account linking are unconstitutional, and strikes them down. So, in effect, and perhaps without intending to, it has also brought the current Aadhaar-based DBT ecosystem – in which the NPCI and banks are important players – to a grinding halt. Further, it has only worsened the exclusion problem caused by fallible biometric authentication, by removing the OTP option.

Conclusion

The factual and logical contradictions outlined so far lead one to conclude that the Majority has not understood that technological progress is making the law irrelevant. As Lawrence Lessig pointed out:

Every age has its potential regulator, its threat to liberty. Our founders feared a newly empowered federal government; the Constitution is written against that fear. John Stuart Mill worried about the regulation by social norms in nineteenth-century England; his book On Liberty is written against that regulation. Many of the progressives in the twentieth century worried about the injustices of the market. The reforms of the market, and the safety nets that surround it, were erected in response.

When faced with a civil liberties case that is the second-longest heard in the history of the Court, the least that the Majority could have done was to engage with the facts, with the newly emerging technological domain of cyberspace, and with how it could make constitutional rights irrelevant. As Lawrence Lessig again points out:

Cyberspace will change from a place that protects anonymity, free speech, and individual control, to a place that makes anonymity harder, speech less free, and individual control the province of individual experts only.

By obstinately refusing to engage with the factual and technological aspects of the Aadhaar project, and how the architecture of the project nullifies the very Aadhaar Act that it upheld, the Majority has only demonstrated its own ignorance of technology, and has probably accelerated the Supreme Court’s own irrelevance, as Lessig had proclaimed so boldly.

The Supreme Court’s Right to Privacy Judgment -III: Privacy, Surveillance, and the Body

(Author’s Note: This is an adapted version of a piece first published today morning in the Business Standard.)

In the previous two essays, I discussed the conceptual foundations of the right to privacy judgment, as well as the theoretical underpinnings of privacy endorsed by the Court. In the next three essays, I shall consider the three aspects that the judgments identify at the heart of the concept of privacy: the body, personal information, and decisional autonomy.

First, the body. As Justices Sachs and O’Regan of the South African Constitutional Court observed, the “inviolability and worth of the human body” is central to any formulation of privacy. The concern for bodily integrity comes through most clearly in Justice Chelameswar’s opinion, when he speaks about “freedom from unwarranted stimuli” (para 36), corporal punishment, and forced feeding (para 38). The phrase “unwarranted stimuli” recalls the 2010 judgment of the Supreme Court in Selvi vs State of Karnataka where, in striking down police interrogation techniques such as narco-analysis and brain-mapping, the Court provided its most detailed and considered analysis of bodily (and mental) privacy under Articles 20(3) and 21 of the Constitution. I have examined the judgment in Selvi elsewhere, and will not rehearse the arguments here, apart from noting that, in the operative order of the Court, Selvi stands affirmed as correctly decided.

Privacy’s concern with the body, however, goes beyond direct, physical interference by the State. Consider, for example, what the journalist Glenn Greenwald writes in No Place to Hide, the book that chronicles Edward Snowden’s unmasking of the American mass surveillance regime:

“Only when we believe that nobody else is watching us do we feel free – safe – to truly experiment, to test boundaries, to explore new ways of thinking and being, to explore what it means to be ourselves… for that reason, it is in the realm of privacy where creativity, dissent, and challenges to orthodoxy germinate. A society in which everyone knows they can be watched by the state – where the private realm is effectively eliminated – is one in which those attributes are lost, at both the societal and the individual level.”

Fifty-five years ago, Justice Subba Rao understood this when he penned his powerful dissent in Kharak Singh vs State of UP. Recall that Kharak Singh was about police surveillance of a “history-sheeter”, which included tracking his movements. Striking down the regulations in their entirety, Subba Rao J observed that “if a man is shadowed, his movements are obviously constricted… [the] movement of an automaton. How could a movement under the scrutinizing gaze of the policemen be described as a free movement? The whole country is his jail… the petitioner under the shadow of surveillance is certainly deprived of this freedom [the freedom of movement]. He can move physically, but he cannot do so freely, for all his activities are watched and noted. The shroud of surveillance cast upon him perforce engender(s) inhibitions in him and he cannot act freely as he would like to do.”

And:

“Assuming that Art. 19(1)(d) of the Constitution must be confined only to physical movements, its combination with the freedom of speech and expression leads to the conclusion we have arrived at. The act of surveillance is certainly a restriction on the said freedom. It cannot be suggested that the said freedom is also bereft of its subjective or psychological content, but will sustain only the mechanics of speech and expression. An illustration will make our point clear. A visitor, whether a wife, son or friend, is allowed to be received by a prisoner in the presence of a guard. The prisoner can speak with the visitor; but, can it be suggested that he is fully enjoying the said freedom? It is impossible for him to express his real and intimate thoughts to the visitor as fully as he would like.”

At the heart of Justice Subba Rao’s dissenting opinion was the crucial insight that surveillance does not always leave perceptible traces in the physical world. Rather, it works insidiously upon the minds of its targets, channeling their actions, their movements, their associations, and their very thoughts into preset grooves and patterns, killing dissent and heterodoxy, and imposing a stifling, psychological conformism upon society. Nor was this opinion outlandish or extreme: from Bentham’s panopticon to Foucault’s disciplinary regimes, the psychological impact that surveillance wreaks upon the human body has been well-known and thoroughly studied.

The disagreement between Justice Subba Rao and the Majority opinion in Kharak Singh is commonly understood to be a disagreement between the “silos approach” to Part III (flowing from A.K. Gopalan, and upheld by the majority), and the integrated approach that read fundamental rights together (espoused by Justice Subba Rao). However, the disagreement actually went much deeper. The majority judges rejected root and branch Justice Subba Rao’s view that surveillance was constitutionally suspect because of the psychological impact it had upon its targets. According to the majority:

“In dealing with a fundamental right such as the right to free movement or personal liberty, that only can constitute an infringement which is both direct as well as tangible and it could not be that under these freedoms the Constitution-makers intended to protect or protected mere personal sensitiveness….  Learned Counsel suggested that the knowledge or apprehension that the police were on the watch for the movements of the suspect, might induce a psychological inhibition against his movements but, as already pointed out, we are unable to accept the argument that for this reason there is an impairment of the “free” movement guaranteed by sub-cl. (d).  Art. 19(1)(d) has reference to something tangible and physical rather and not to the imponderable effect on the mind of a person which might guide his action in the matter of his movement or locomotion.”

The core disagreement between the majority and Justice Subba Rao, therefore, was not merely an interpretive dispute about how to read the Constitution, but a much deeper and more fundamental clash: was the Constitution’s protection of the human body limited to physical and observable constraints, or did the Constitution protect something more intangible as well – that “realm of privacy where creativity, dissent, and challenges to orthodoxy germinate”?

Puttaswamy’s overruling of Kharak Singh, therefore, opens up a further question. Was Kharak Singh overruled only in a narrow sense – i.e., to the limited extent that the “silos approach” was held to be no longer good law? Or was it overruled in the broader sense, in its very understanding of what fundamental rights are about?

In my view, a close reading of all six judgments makes it clear that the Kharak Singh majority was overruled in the broader sense, and Justice Subba Rao’s dissent was resurrected in its entirety. Consider, for example, Justice Chandrachud’s observation, in paragraph 168 of his opinion:

“Individual dignity and privacy are inextricably linked in a pattern woven out of a thread of diversity into the fabric of a plural culture.”

This is not mere rhetoric. A few lines before this, Justice Chandrachud referred to the individual’s right to “stand against the tide of conformity in creating a zone of solitude.” (para 168) In his separate opinion, Justice Chelameswar warned of “attempts by governments to shape the minds of subjects… conditioning the thought process by prescribing what to read or not to read… what forms of art alone are required to be appreciated leading to the conditioning of beliefs.” (para 37) Justice Bobde defended the idea of a “zone of internal freedom in which to think.” (para 22) And perhaps most bluntly of all, Justice Kaul observed that “knowledge about a person gives a power over that person… privacy is key to the freedom of thought.” (para 52) An overarching cast was given to this by Justice Nariman who, throughout his judgment, referred to Justice Subba Rao’s opinion as one of the “three great dissents” in the history of Indian constitutional law, making it clear that he was referring to – and resurrecting – that dissent in its richest, deepest, and most comprehensive sense.

What all the separate opinions understood was that without privacy, the core constitutional freedoms that we take for granted will cease to be meaningful. Justice Subba Rao’s eloquent description of a shadowed man to whom the entire country was a prison tells us how important privacy is to the freedom of movement. And it is more than that. We will hesitate to associate with unpopular groups espousing unpopular causes, if we know that we are being watched. There is so much that we will not say to each other if we know that, one day, it could be made public. We will self-censor, draw ever more constricting lines in the sand, and suffocate ourselves with our own caution. We will censor others, warning them not to speak too much, be too radical, or think too differently.

On the Kharak Singh majority’s view, none of that is of constitutional concern. But fortunately, that is no longer the law of the land. Five and a half decades after his solitary dissent, Puttaswamy has ensured that Justice Subba Rao’s insight is now constitutional wisdom. As the above observations show, one of the crucial features of the right to privacy judgment is the understanding that democracy is founded on pluralism and diversity, and pluralism and diversity begin in the mind. Privacy is important not because people have something to hide, but because privacy gives people a sanctuary where they can think, be, and live, in opposition to the dominant cultural, social, and political norms of the time. Privacy allows people a space where they can refuse to conform. And it is in that space where liberty flourishes, the liberty to fashion alternative ways of life, which – ultimately – form and take shape in the public sphere, allowing society always to renew itself.

As Meenakshi Arora, one of the senior counsel for the petitioners observed during oral arguments, to live without privacy is akin to living under the shadow of a perpetual, general warrant in your name, issued by the State – a warrant that chills speech, chills association, chills movement, and chills thought; a perpetual warrant that can only produce the kind of conformity that Justice Jackson, the great American judge, likened to the “unanimity of the graveyard.”

In making the body central to the right to privacy, Puttaswamy has performed the crucial role of making cases like Selvi – which were all about limiting the brute, coercive power of the State over the human body – constitutionally secure. But it has also performed an even more crucial role in giving firm, constitutional footing to the challenges to State surveillance, which are bound to be at the heart of the legal battle between the individual and the State in the years to come.

 

ACLU vs Clapper: NSA Surveillance Appeal to be Heard Today

Previously on this blog, we have covered the oral arguments (see here and here) in ACLU vs Clapper, before the Southern District Court of New York. ACLU vs Clapper, of course, is the American Civil Liberties Union’s challenge to the NSA’s bulk telephony metadata collection program. We have also covered the decision of the Southern District Court of New York, which rejected the challenge and upheld the program, as well as its implications for India (see here). Today, the Second Circuit Court of Appeals will hear ACLU’s appeal from the District Court’s decision. ACLU has the details here, reiterating its Fourth Amendment (privacy) and First Amendment (free speech and association) arguments. As we’ve noted before, both these rights occupy important places in the Indian Constitution as well; consequently, it will be well worth following the progress of this case as it winds its way through the Appeals Court, and possibly up to the Supreme Court of the United States.

Breaking: ACLU vs Clapper holds Bulk Surveillance Legal – Implications for India

On this blog, we had discussed earlier the oral arguments in ACLU v Clapper. Just now, the New York District Court has ruled bulk surveillance legal, going against the decision of the District Court for the District of Columbia in Klayman v Obama (if it wasn’t already, this makes it inevitable that the United States Supreme Court will eventually be called upon to settle the conflicting lower court decisions).

As we had discussed earlier, ACLU v Clapper consisted of two claims. The first was a statutory one, based on S. 215 of the Patriot Act, which is of no concern to us, since no parallel legislation with a similar history exists in India. The second was a constitutional one, based on issues of free association and privacy, which is directly relevant to India.

On a quick reading of the judgment the following important points emerge:

– Contrary to ACLU’s submissions, the Court held that the 1979 precedent of Smith vs Maryland applied, which had held that an individual has no privacy interest in information voluntarily turned over to third parties (telecommunications providers). As we have discussed on this blog, the Indian courts have rejected Smith vs Maryland and its precursor, US vs Miller, in the 2004 judgment of Distt Collector vs Canara Bank. Holding that privacy is a right of persons, not places, the Supreme Court affirmed in Canara Bank that an individual has a privacy interest in personal financial documents held by a third party (the bank). [The New York court’s Smith analysis can be found at pages 39–43.] The Court also holds that the Fourth Amendment lays down a standard of reasonableness, and does not require that the “least intrusive method” be used when carrying out a search within the terms of the Constitution. Again, arguably, the position is different in India. As we have seen, the compelling State interest test for privacy violations goes hand-in-hand with narrow tailoring, as is evident from the rules framed by the Court in PUCL vs UoI and those accepted as constitutional in State of Maharashtra vs Bharat Shantilal Shah, which categorically required the government to explore other, less intrusive methods of surveillance before carrying out interceptions, and also required it to intercept only to the minimum extent necessary to carry out its goals.

– The Court also ruled that the argument that bulk collection would have a chilling effect on the freedom of association was not well-founded. To recap: ACLU had argued that the knowledge that its call records were being collected would lead to a “chilling effect”, in that it would restrict the communication and association rights of hostile and unpopular (yet legal) groups, who would self-censor in an attempt to avoid governmental knowledge of their activities. The reasoning of the Court appears to be that what was taking place was only collection; actual querying of the metadata to reveal specific information could be undertaken only on specific grounds. Since the likelihood that ACLU’s data itself would be queried and reviewed rested upon an “attenuated chain of possibilities”, the chilling effect had not been proven. In this way, the Court implicitly distinguished prior cases like NAACP vs Alabama, where, for instance, a group treated with hostility by the government had been required to reveal its membership lists. Whatever the merits of this argument, once again, the key point upon which it turns is that the NSA surveillance is restricted to metadata collection. Consequently, the logic does not apply to something like the CMS, which is all about intercepting bulk content. [See pages 45–46 for the First Amendment analysis.]

An extraordinary statement at the end:

“The effectiveness of bulk telephony metadata collection cannot seriously be disputed.” [p. 48]

However, as Klayman found, that is precisely what is under dispute. In the fifty-four instances cited by the government, it had failed to demonstrate that the outcome would have been materially different in even one of them (see here for an analysis). In other words, there is a familiar story here: in a national security case, a judge takes the executive’s word at face value, and accords it an extremely high level of deference. The Indian courts have an ignominious history in this regard (Habeas Corpus), and it will be crucial to see how this particular claim is treated here.

The New York court’s ruling is certainly a blow to privacy rights. Indian privacy lawyers ought to study it carefully, as they should Klayman vs Obama, not only because of what it holds, but because of what implicitly follows: if the holding of legality is founded upon legal arguments that have been considered and rejected by the Indian Supreme Court, and upon factual premises directly opposed to those prevailing in India, ACLU vs Clapper might – paradoxically – be more of an ally than an enemy in the fight against bulk surveillance.

 

Surveillance and Privacy in India – VII: Summary

Over the last six posts, we have tried to examine – chronologically – Indian Supreme Court cases dealing with the constitutional right to privacy, with a specific focus on surveillance. Our enquiry has spanned fifty years and many different aspects of law that touch an individual’s personal life – from criminal law practices (police surveillance, narco-analysis, self-incrimination) to phone-tapping, from marital relations to the status of one’s bank records. Despite the diversity of cases and the differing reasoning employed by judges to reach differing results over time, we have seen that a careful analysis reveals certain unifying strands of logic and argument that can provide a coherent philosophical and constitutional grounding to the right to privacy in Indian law, bases that the Court can – and should – draw upon in order to decide an eventual CMS/bulk surveillance challenge in a principled manner.

We can commence by emphasizing the distinction between two sets of privacy cases, a distinction that the Court has failed to appreciate so far. One set of cases involves privacy claims between private parties. Examples include a hospital revealing a patient’s medical records (Mr X v Hospital Z), or one spouse tapping the other’s phone (Rayala v Rayala). Now, these cases involve the infringement of a privacy right, but they do so as a matter of private law, not constitutional law. As a matter of principle, the remedies would lie in tort – the tort of invasion of privacy, for instance, or breach of confidence. The Court’s invocation of Article 21 in these cases must be deplored as a serious mistake. Article 21 sets out a constitutional right, and unless otherwise expressly provided by the constitutional text (see, e.g., Article 15(2)), constitutional rights are applicable vertically against the State, and not horizontally between individuals. Once again, a simple hypothetical will illustrate the absurdity of cases like Rayala: A murders B. Very obviously, the law governing this incident is the Indian Penal Code, which defines murder and prescribes the punishment for it. A has not violated B’s Article 21 right to life by murdering him. Now, there is something to be said for philosophical arguments that challenge the public/private, State/individual dichotomy as a matter of first principle. That, however, is not our concern here. Whatever the philosophical validity of the distinction, there is little doubt that our Constitution subscribes to it quite explicitly, by having a Part III in the first place, and with provisions such as Articles 13 and 32.

There is one way of reconciling these cases: to read them not as invoking Article 21 as a ground for the decision, but as invoking it to infuse the right to privacy with substantive content. That is, the private law right to privacy and the constitutional right to privacy, while rooted in different sources and enforceable against different entities, nonetheless (reasonably enough) codify the same abstract conception of what privacy is – and it is to that end that the Court, in private-party cases, cites Article 21.

This is crucial, because it helps to clarify the way in which these two rights are different, and to make sense of a jurisprudence that would be hopelessly incoherent otherwise. The difference lies in the standard for justifying an infringement. In the private-party cases, the Court – rightly – treats the matter as one of balancing the various rights and interests of the different parties to the case. Mr X v Hospital Z, for instance – as understood by the Court – required a balancing of the patient’s right to privacy against his future in-laws’ right to know about his prior, debilitating medical condition, in order that there be informed consent to the marriage. Small wonder, then, that in these cases the Court – again, rightly – cites Article 8 of the ECHR, and analyses them in the language of proportionality.

In cases involving the State, however, we have seen that the Court has (almost uniformly) insisted upon the far higher standard of compelling State interest. Again, there is a logic to this distinction. The importance of maintaining a private sphere against State intrusion, the extent to which the State now has the power to intrude (as we have all seen over the last six months), considerations that ultimately go to the heart of maintaining a free and democratic society – all justify (if not necessitate) a higher standard. Once we understand this, it is possible now to understand why the Supreme Court has adopted one test in some cases, and another test in other cases. The justification is a principled one (even if the Supreme Court might not have been aware of it).

Proceeding, then, to the Article 21 constitutional right to privacy. The Court has located this within Article 21’s guarantee of personal liberty. In the early cases – Kharak Singh and Gobind – the Court understood the philosophical foundations of privacy to lie in the idea of individual dignity; that is, the basic thought that in order to live a dignified life, one must be able to have a sphere of action that is free from external invasion (this, essentially, is what is meant by the phrase, often used by the Court, “the right to be left alone”). The dignitarian justification of privacy is to be sharply contrasted with another justification, which held the field in American constitutional law for a long while: the propertarian justification, which grounds privacy in the idea that government is to keep off private property. It is this contrast that the Supreme Court’s slogan – “the right to privacy belongs to persons, not places” – captures.

Ultimately, possibly, the basic philosophy is similar – advocates for property rights argue that without a certain measure of private property, an individual cannot live an independent and dignified life. Practically, however, the shift encodes an analytical difference. A propertarian foundation – concretely – would involve a set of spaces that are placed out of bounds (e.g., the Fourth Amendment’s list of “homes, papers, effects” etc.). The dignitarian foundation would extend its scope to acts and places with regard to which persons have a reasonable expectation of privacy. Naturally, this will lead – and has led – to different results in practice, with the dignitarian foundation providing more expansive privacy protection.

The persons-not-places justification also led the Supreme Court to reject the third-party doctrine, according to which privacy interest is lost when personal effects are voluntarily handed over to a third party. In Canara Bank the Court emphasized that the character of those items – their personal nature – does not change simply because their location has changed. The privacy interest is retained, whether they are bank records, or telephone details.

These are the contours of the privacy right. Naturally, it is not absolute, and the Court has taken pains to specify that on numerous occasions. What, then, justifies an infringement? The Court has consistently called for a “compelling State interest”, one that rises beyond the simple “public interest” encoded in the Article 19 restrictions. Side-by-side with compelling State interest, the Court has also required – although it has never expressly spelt it out – the restrictive law to be narrowly tailored. In other words, the government must show that its infringing law not only achieves the compelling State interest, but does so in a way that restricts privacy in the narrowest possible manner. If there are other conceivable ways of achieving the same goal that do not infringe upon privacy to the extent the impugned law does, the law will be struck down. We see this in the police surveillance cases, where, in Gobind, for instance, the Court read into Regulation 855 an additional requirement of gravity, to ensure that it was narrowly tailored; and we see it even more clearly in the phone-tapping cases, where the Court’s rules require not only the specification of persons, numbers and addresses, but also require the State to resort to surveillance only if other methods are not reasonably open, and, in so doing, to infringe privacy minimally. Targeting, indeed, is critical: all the surveillance cases that we have explored have involved specific, targeted surveillance (indeed, S. 5(2) of the Telegraph Act only envisages targeted surveillance), and the very fact that the surveillance is targeted – aimed at individuals against whom there are more than reasonable grounds of suspicion – has been a major, almost dispositive, ground on which the Court has found the surveillance to be constitutional. Targeting, therefore, seems to be an integral aspect of narrow tailoring.

I do not mean to suggest that the above is a complete philosophical account of privacy. It ignores, for instance, the very legitimate concern that creating a private sphere only serves to justify relations of non-State domination and oppression within that sphere – both symbolically and actually (see, for instance, the infamous marital rape exception in Indian criminal law). It presumes – instead of arguing for – the basic philosophical idea that the ultimate unit of society is the indivisible, atomized individual self, living in a hermetically sealed “zone” of privacy – an assumption that has come under repeated attack in more than fifty years of social theory. I hope to explore these arguments another day, but the purpose of this series has been primarily doctrinal, not philosophical: to look at surveillance within the framework of established constitutional doctrine without questioning – at least for now – the normative foundations of the doctrine itself.

Our conclusions, then, summarized very briefly:

– the right to privacy is an aspect of Article 21’s guarantee of personal liberty, and is grounded in the idea that a free and dignified life requires a private sphere

– one does not necessarily lose one’s privacy interest in that which one hands over to a third party

– an infringement of privacy must be justified by a compelling state interest, and the infringing law must be narrowly tailored to serve that interest

As far as the CMS, Netra and other dragnet surveillance mechanisms go, it is clear, then, that they implicate a privacy interest; and to justify them, the government must show that there is no other way in which it could achieve its goals (of combating terrorism etc.) than through bulk surveillance on an industrial scale.

If recent judgments of our Supreme Court do not exactly instill confidence in its role as the guarantor of our civil liberties, its long-term record in national security cases is even worse. A.K. Gopalan, Habeas Corpus and the 2004 PUCL come to mind as examples. It is therefore unclear how the Court will rule on a CMS/surveillance challenge. One thing is clear, though: the privacy jurisprudence that it has developed over the last fifty years provides it with all the analytical tools to fulfill its constitutional mandate of protecting civil liberties. Consistent with the narrow tailoring test, the Supreme Court ought not to allow the government to baldly get away with asserting a national security interest, but should require it to demonstrate not only how national security is served by dragnet surveillance, but also how dragnet surveillance is the only reasonable way of achieving national security goals. The possibility of abuse is too great, and the lesson that history teaches us – that totalitarianism always begins with pervasive governmental spying on individuals – is ignored at our peril.

In that respect, last week has been significant, as it witnessed the beginnings of the pushback against the American surveillance state. In his opinion in the District Court for the District of Columbia, which we referred to on this blog, not only did Judge Leon hold the NSA spying program likely to be unconstitutional, but, notably, he refused to accept NSA claims of national security at face value. He went into the record, and found that out of the 54 instances that the NSA had cited of allegedly foiled terrorist plots, it had miserably failed to prove even one where the outcome would have been different without bulk surveillance. This is a classic example of how narrow tailoring works. And later in the week, the Review Panel set up by President Obama emphatically rejected the contention that bulk surveillance is a necessary compromise to make in the liberty/security balance. Given this, and given the worldwide pushback underway against such surveillance measures, from Brazil to Germany, it would be a constitutional tragedy if the Supreme Court ignored its own well-crafted jurisprudence and let the government go ahead with bulk surveillance on the basis of asserted and unproven national security claims. Tragic, but perhaps not entirely unexpected.