Coronavirus and the Constitution – XXIV: Aarogya Setu and the Right to be Forgotten [Guest Post]

[This is a guest post by Karthik Rai.]


Introduction

While the Puttaswamy case recognized privacy as intrinsic to Art. 21 of the Constitution of India, it simultaneously conceded that health concerns could trump privacy considerations, provided the resulting intrusions into individuals’ privacy were necessary and proportionate [paragraph 180]. In light of the Covid-19 pandemic currently gripping the world, one such purported governmental intrusion into citizens’ privacy was the introduction of the ‘Aarogya Setu’ app (‘the App’) to track users’ movements and ascertain whether they are at risk of contracting the virus.

Let me briefly describe the App. The App obtains details such as the user’s name, sex and medical antecedents, to mention a few, and relies on the mobile’s Bluetooth and GPS services. These details are, under certain circumstances, uploaded to the server, which the government then accesses to respond appropriately. There were various concerns with the App’s privacy policy (‘Policy’), which compelled the government to release an updated Policy with several changes. However, criticisms have persisted – over its static Digital Identity Number (‘DiD’), its requirement of GPS being excessive and out of line with global standards, and its lack of transparency – all of which seemingly infringe privacy disproportionately.

However, through this piece, I provide a hitherto-unexplored perspective on the App’s Policy. First, I will show that the Policy contains a substantial phrasal fallacy, intentional or not; next, I will demonstrate how this affects the right to be forgotten (‘RTBF’) and its concomitants, undermining user privacy. Finally, I shall conclude with suggestions on how to alleviate the problem.

The Phrasal Fallacy and its Consequences

Clause 1(d) of the Policy (it can be accessed here) states that the App collects locational information in fifteen-minute intervals – basically, the App stores data about the places users visited. It also states when said data will be uploaded to the server. Clause 3(b) addresses data retention apropos information collected under Clause 1(d), and posits three different time periods for data retention, based on the category the data falls in:

Category 1: If the data is not uploaded to the server (the conditions under Clause 1(d) not having been satisfied), it is ‘purged’ from the App within 30 days.

Category 2: If the data is uploaded to the server, two further situations arise:

If the person tests negative for Covid-19, the data will be purged from the Server within 45 days of upload.

If the person tests positive, the data will be purged from the Server within 60 days of being cured.

While Category 1 entails deletion from the App, Category 2 concerns deletion from the Server alone. So, if a person’s Clause 1(d) data has been uploaded to the Server, no provision provides for the deletion of the same information from the App, implying it could remain on the App indefinitely.

Clause 1(d) stipulates three situations under which the data gets uploaded to the Server: when the person tests positive for Covid-19, when ‘self-declared’ symptoms indicate a probability of being infected, and/or when the self-assessment test returns a ‘yellow’ or ‘orange’ result.

The assessment is conducted by algorithms whose criteria are unclear, and reports have noted that misidentifications are highly possible. A similar mechanism exists in China, where such predictive data-assessment has proven inaccurate. Even mere suspicion could therefore lead to a ‘yellow’ outcome, mandating a data transfer to the server. The user would then fall within Category 2, and his/her data would be deleted from the server but could linger in the App indefinitely, without violating the Policy.
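The interaction of Clauses 1(d) and 3(b) described above can be sketched as a small decision rule. The following is a minimal, illustrative model of my reading of the Policy; every function and field name here is my own hypothetical shorthand, not anything the Policy or the App actually defines:

```python
from datetime import date, timedelta

def uploaded_to_server(tested_positive, symptoms_declared, assessment):
    """Clause 1(d): the three triggers for location data leaving the App."""
    return tested_positive or symptoms_declared or assessment in ("yellow", "orange")

def purge_rule(uploaded, tested_positive, collected_on,
               uploaded_on=None, cured_on=None):
    """Clause 3(b): returns which copy of the data is purged, and by when."""
    if not uploaded:
        # Category 1: purged from the App within 30 days of collection.
        return ("app", collected_on + timedelta(days=30))
    if tested_positive:
        # Category 2: purged from the Server within 60 days of being cured.
        return ("server", cured_on + timedelta(days=60))
    # Category 2: purged from the Server within 45 days of upload.
    return ("server", uploaded_on + timedelta(days=45))

# The phrasal fallacy: once data is uploaded, every deletion rule targets
# the Server copy. No clause ever schedules deletion of the App's own copy.
copy, deadline = purge_rule(
    uploaded=True, tested_positive=False,
    collected_on=date(2020, 5, 1), uploaded_on=date(2020, 5, 2))
# Only the Server copy receives a deadline; the App's copy has none.
```

On this model, a user whose self-assessment merely returns ‘yellow’ exits Category 1 (the only rule that deletes anything from the App) and enters Category 2, whose deadlines all attach to the Server.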

Clause 2(e) states that the data collected under Clause 1(d) will not be used for purposes other than those mentioned in Clause 2. However, Clause 2 prescribes uses only for the replicated data uploaded onto the Server. No use is prescribed for the original data the App collects, which means Clause 2(e) does not strictly apply to it. Thus, it could be used for anything, so long as it is not uploaded to the Server. Additionally, the data present on the App is not even encrypted into DiDs.

Clause 1(a) data, which contains personal attributes like name and gender, remains for as long as the account does. Clause 1(a) data is first ‘collected’ in the App and subsequently ‘stored’ on the server. Thus, for many users, both Clause 1(a) and 1(d) data can remain indefinitely on the App (and thus the mobile), allowing an accurate map of the places the user has visited to be charted and easily combined with his/her personal attributes.

The government recently issued a slew of directions to increase usage of the App, including making its installation mandatory for all employees in both the private and the public sector. Astonishingly, the Noida police have stated that not having the App on one’s smartphone would constitute a crime, possibly attracting imprisonment. In light of these developments, it becomes all the more important to understand how the problematic Policy could precipitate privacy violations, contravening fundamental principles of data protection.

Purpose Limitation and the Violation of RTBF

Purpose Limitation (‘PL’), an essential prerequisite for data protection, requires that data be collected for a specific purpose. The ‘data principal’ – the term used for persons whose data is processed – must know the purpose for which they voluntarily provide data. The government’s use of that data must therefore be constrained by the informed consent of the user.

The Supreme Court held in the Aadhaar judgement that purpose limitation is integral to executive projects involving data collection – unless prior permission is given, third parties cannot be granted access to personal data [paragraph 166]. This principle is embodied in S.5 of the yet-to-be-implemented Personal Data Protection Bill, 2019 (‘the Bill’). PL, as stated earlier on this blog, enhances transparency in data processing, and helps examine the proportionality of the mechanism used to collect data for a specific purpose. Moreover, as Siddharth Deb writes, it prevents the emergence of permanent data ‘architectures’ built by interlinking databases without consent. Stemming from this is an implicit expectation of RTBF. In order to understand and appreciate the relevance of RTBF, it becomes pertinent to establish the jurisprudence pertaining to it in India.

The Right to be Forgotten: A Brief History

RTBF grabbed headlines after the well-known Google Spain case, in which a Spanish citizen sued Google requesting the erasure of links concerning the forced sale of certain properties he had owned due to debts, indicating financial hardship. The Court of Justice of the European Union ruled in the citizen’s favour, acknowledging that his right to be forgotten, and therefore his privacy, had been violated. Since the information had become “irrelevant” and “inadequate”, he had a legitimate claim to have such data removed under EU Directive 95/46; thus he could be ‘forgotten’ from the internet [paragraphs 93-94].

However, the trajectory of RTBF’s evolution in India was slightly different, since the right had no statutory grounding. The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 – India’s first legal framework recognizing the need to protect the privacy of personal data – made no mention of RTBF. Contrasting judgements on RTBF therefore emerged.

To exemplify, the Gujarat High Court in Dharamraj Bhanushankar Dave v. State of Gujarat held that there was no law under which the petitioner could claim that he had a right to ensure the removal, from the internet, of a court judgement he was a party to; therefore, his arguments were insufficient to establish a successful case of violation of Art. 21 of the Constitution. However, a judgement reported four days later, {Name Redacted} vs The Registrar General, recognized RTBF explicitly, though in a limited sense. The petitioner’s request to remove his daughter’s name from a judgement involving claims of marriage and forgery was upheld by the Karnataka High Court. It held that recognizing RTBF would parallel initiatives by ‘western countries’ which uphold this right when ‘sensitive’ cases concerning the ‘modesty’ or ‘reputation’ of people, especially women, were involved [paragraph 5].

However, it was only in the Puttaswamy judgement that RTBF was unequivocally recognized, by Justice Kaul, as residing in Art. 21’s guarantee of privacy. He noted that recognizing RTBF would imply that an individual’s desire to remove his/her personal data from the virtual space ought to be respected, if said information served no ‘legitimate interest’, was ‘incorrect’, or was not ‘necessary’ or ‘relevant’ [paragraph 69]. He did, however, concede that RTBF was subject to reasonable restrictions based on countervailing rights like free speech [paragraph 69]. Similarly, in Zulfiqar Ahman Khan vs M/S Quintillion Business Media, the Delhi High Court recognized RTBF as ‘inherent’ in the right to privacy [paragraph 9], ordering the removal of internet articles that would sully the plaintiff’s reputation ‘forever’ [paragraphs 7 and 8].

Applying RTBF to the App’s Policy

In light of the judicial interpretation of RTBF in India, especially post-Puttaswamy, it is clear that once the purpose for which data was submitted is complete, data principals have a right to have it erased without unwarranted intrusions into their privacy. This is embodied in S.20(1)(a) of the Bill. RTBF is founded on the dignity of the individual, which could be tarnished if the information is not erased.

In the instant case, personal data is uploaded by users to the App for a limited, specific purpose – to ascertain their health status. Once that purpose is served, the collected data should have been deleted automatically, as Prof. Schönberger suggests. Moreover, the data collected by the App constitutes a ‘digital footprint’, since the upload is by the principals themselves and not by third parties. Thus, once the purpose is complete, deleting such information from the App would harm neither public interest nor free speech. RTBF in this case should therefore have been absolute.

However, if a user is denied this right and his/her personal information is not deleted from the App, problems of identification arise. There have been instances of inter-app communication, wherein one app penetrates another to extract its information. That app can then send the data to an external server, possibly in another country, where it could be used for any purpose. This could enable microtargeting, revealing what medicines a user takes, and the like, all of which violate privacy.

Besides, identification based on such data could precipitate widespread social media abuse. In South Korea, where similar surveillance methods were recently used, detailed timelines of people’s locations were uploaded to social media, with details like place of residence giving reasonable indications of who the person was. One man was accused of infidelity after a logged location placed him near a brothel. Such online harassment affects people’s psyche, and has even led to suicide.

The Right to Informational Self-Determination (‘RISD’)

RTBF is grounded in the principle that a citizen controls his/her digital footprint: only the user has complete control over their data, and no competing claim can put the information to any other use. The data principal must be equipped to retain personal control over personal information. In Puttaswamy, the judges emphasised the criticality of informed consent and informational autonomy, in line with European data privacy practice [paragraph 177], and any use of data contravening such consent would be ‘unauthorized’. Consent, therefore, is not a one-time permission, but must be obtained each time a new, specific use of the information is needed.

In the App, if the user tests ‘yellow’ or is Covid-positive, the information is uploaded from the App to the Server voluntarily. However, 45 or 60 days after such transfer, as the case may be, the purpose of sharing the information is complete. The data principal had consented to the government alone controlling such information, and that too for a specific period, which has lapsed. Thus, RTBF operates automatically and, respecting the user’s RISD, consent for further use of the data should immediately terminate. However, since the App does not purge this information, users’ locational data could be illegitimately used for any purpose, by anyone. This violates their RISD.

Conclusion

The Supreme Court in Puttaswamy held that the right to privacy is subject to reasonable restrictions [paragraph 26, Sapre J.]. Any infringement must therefore be backed by law, must be proportionate to the specific objective sought to be achieved, and must be the least intrusive measure available.

There is no clarity on the legal underpinning of the App; it can be surmised that it has been envisioned under either the vaguely-provisioned Epidemic Diseases Act or the Disaster Management Act, both of which provide extensive executive discretion. However, no framework of clear regulations has been designed to govern the collection and disclosure of information such as travel history and sex. Thus, there is no specific law backing such executive action.

This is aggravated by the fact that the Bill has not yet been passed; statutory grounds to regulate data collection and processing are thus still unavailable. Coupled with the fact that judges are deferential towards executive actions during such testing times, a challenge to the App based on Art. 21 may not be sustained. Mere assurances by the government about protecting privacy will not suffice, as evidenced by Singapore, where, despite the government’s guarantees, user data was published online in great detail.

It is not difficult, then, to conclude that statutory data protection is urgently needed. The Bill must soon be implemented, and the App’s privacy policy recalibrated to pass scrutiny under the Bill’s provisions, including RTBF and purpose limitation. This would give the App legitimate legal backing. Additionally, ensuring open access to the App’s source code is all-important: it would facilitate greater transparency and help remedy privacy flaws, thereby rendering the App the least intrusive alternative.

Countries like China and South Korea, which have managed to reduce Covid-19 cases through measures mirroring the App, have substantially infringed their citizens’ privacy, with citizens condoning this as a necessary trade-off for greater efficiency of the measures. However, this institutionalizes a ‘culture of tolerance’ of repeated and excessive privacy violations, giving the government greater confidence to effect more blatant privacy violations in the future. Thus, in light of the abnormal times we are facing, the government must implement the Bill, recalibrate the Policy, and take other necessary measures to achieve an optimal trade-off between efficiency and privacy.

