The Bombay High Court’s split verdict on Government Fact-Checking (Part II: Free Speech Issues)

The first blog post in this series (here) examined how the two Judges of the Bombay High Court understood the scope and effect of Rule 3(1)(b)(v) of the IT Rules in significantly different ways. Justice Patel characterized the Rule as a broad rule that could allow for the indirect censorship of any content concerning the Union Government by threatening to remove an intermediary’s safe harbour. In contrast, Justice Gokhale interpreted the Rule to apply to cases where users intentionally share misinformation and concluded that the Rule did not require intermediaries to remove the content flagged by the Government’s Fact Checking Unit (“FCU”).

As a reminder, Rule 3(1)(b)(v) caused intermediaries to lose their safe harbour if they did not make reasonable efforts to remove content that:

deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or, in respect of any business of the Central Government, is identified as fake or false or misleading by such fact check unit of the Central Government. (emphasis supplied)”

Once they had characterised the Rule, each Judge had to decide three notable free speech issues. First, is “fake, false or misleading” speech (as identified by the FCU) a class of speech that can be restricted under Article 19(2)? Second, is the Rule unconstitutionally vague? Third, does the Rule constitute a proportional restriction on free speech and offer sufficient procedural safeguards against abuse?

False speech under the Constitution

Article 19(1)(a) provides for the freedom of speech and Article 19(2) sets out an exhaustive list of grounds for restricting speech (¶101, 153 Patel J.). These are the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, contempt of court, defamation, and incitement to an offence. Thus, any restriction on speech must bear a nexus to one of these State interests. Crucially, as Justice Patel notes, “public interest” (or falsehood) is not a ground to restrict free speech under Article 19(2) (¶135 Patel J.). He further observes that in certain situations, where there exists a meaningful nexus with a legitimate aim in Article 19(2), false information may be restricted. For example, he asks, “Where might a piece of ‘fake news’ calling for an insurrection or an incitement to communal or other violence fall? Conceivably, this could well be within the Article 19(2) limits of ‘public order’” (¶164 Patel J.). But in such a situation, speech would not be restricted solely because it is false, but because it threatens public order – a legitimate State interest under Article 19(2). Rule 3(1)(b)(v) of the IT Rules, however, restricts speech solely on the ground that it has been flagged as false by the FCU. Thus, as Justice Patel concludes, “The impugned Rule takes up falsity per se, and restricts content on that ground divorced from, and untraceable to, any specific part of Article 19(2). That would be impermissible.” (¶164 Patel J.). In short, it is impermissible to restrict speech solely on the ground that it is false.

This is also a classic overbreadth analysis. Justice Patel’s opinion recognises that Rule 3(1)(b)(v) may restrict speech that can be validly curtailed under Article 19(2) (e.g., false information that amounts to incitement). However, because the Rule goes beyond this to also potentially restrict lawful speech that has no nexus with a State interest under Article 19(2), it is an overbroad speech restriction. As Justice Patel notes, the Rule

necessarily and axiomatically makes suspect and subject to identification with no reference to cause, effect or Article 19(2)’s constraints, every single digital data chunk that relates to the business of the Central Government (¶157 Patel J.).  

Justice Gokhale’s approach to this question takes some breaking down. First, she observes that false speech would not be free speech under Article 19(1)(a) of the Constitution itself (¶¶41, 55 Gokhale J.). She finds that the right to free speech does not include the right to abuse one another, as this would undermine societal harmony and national security. This is problematic because nothing in the text of Article 19(1)(a), which simply provides for the “freedom of speech”, suggests that any speech, let alone false speech, does not constitute free speech. It is not as though the Indian Constitution does not allow restrictions on speech, or indeed even false speech. For example, the Constitution itself identifies defamation (a type of false speech) as a ground to restrict speech. But this is provided for in Article 19(2). Therefore, Article 19(2) itself captures the balancing required between free speech and other societal interests such as reputation or national security. Hence, it is submitted that the more convincing interpretation of Article 19(1)(a) and Article 19(2) is that all speech is first protected as free speech by Article 19(1)(a) and only restrictable on the grounds set out in Article 19(2).

Justice Gokhale’s second answer to the Article 19(2) question is to note that the Supreme Court in Shreya Singhal v. Union of India held that an intermediary will only lose its safe harbour if a Court or government agency directs it to remove content, and that such a removal direction must relate to an Article 19(2) interest (¶20 Gokhale J.). This makes sense, because if the content removed exceeded a permissible restriction under Article 19(2), the Court or government agency would be removing lawful speech. Justice Gokhale then concludes that because Shreya Singhal held that an intermediary can only be required to remove content that is validly restrictable under Article 19(2), Rule 3(1)(b)(v), which requires intermediaries to take down FCU flagged content, is compliant with Article 19(2) (¶21 Gokhale J.).

With respect, this is entirely circular logic. Shreya Singhal held that a Court or the Government can only restrict content under the IT Rules when it is constitutionally permissible under Article 19(2). Rule 3(1)(b)(v) is a ground to restrict speech under the IT Rules, a ground that was added to the Rules after the Shreya Singhal verdict. One cannot conclude that Rule 3(1)(b)(v) is constitutionally valid merely because Shreya Singhal said that all restrictions under the IT Rules must be valid. The court must test the validity of the Rule and provide an independent justification for a conclusion of validity that is not already part of its premise. If Justice Gokhale’s logic is applied, constitutional reasoning would proceed as follows: laws must comply with the Constitution, this is a law, therefore it is compliant with the Constitution. This is not a tenable approach and entirely evades the tests set out by the Supreme Court to judge the constitutionality of a free speech restriction.

Vagueness

On the question of whether Rule 3(1)(b)(v) was unconstitutionally vague, the two Judges’ divergent interpretations of the scope of the Rule come to the fore. Because Justice Gokhale construed the Rule as applying only to situations where users intentionally shared false content, she found that this narrowed the scope of the Rule to within judicially manageable standards (¶31 Gokhale J.). She further held that a breach of the Rule would only occur if the user shared content with a reckless disregard for the truth or “actual malice” (¶37 Gokhale J.). Finally, Justice Gokhale concluded that if content, no matter how critical of the Government, was based on some “existent and not fake or known to be misleading” information, it would not be hit by the Rule (¶44 Gokhale J.).

In contrast, Justice Patel highlighted the difficulties in ensuring that content concerning the government was not “fake or false or misleading”. He pointed out that information about the government, indeed even government data concerning the economy, poverty, or health, was often within the realm of subjective assessment (¶119 Patel J.). He then referred to the Indian Evidence Act, and in particular the definition of “Proved”. The provision states that:

 A fact is said to be proved when, after considering the matters before it, the Court either believes it to exist, or considers its existence so probable that a prudent man ought, under the circumstances of the particular case, to act upon the supposition that it exists. (emphasis supplied)”

Justice Patel observed that the issues of fact, proof, belief, probability, prudence, and supposition, which are central to determining facts in law, were absent from Rule 3(1)(b)(v) (¶115 Patel J.). He therefore found the terms “fake or false or misleading” to be impermissibly vague because they offered no judicially manageable standards for restricting speech and conferred excessive discretion on the FCU.

Given that both Judges’ interpretation of the vagueness question was fundamentally shaped by their understanding of how Rule 3(1)(b)(v) operated (and that has already been discussed extensively in Part I of this blog series), we can leave this issue to rest here.   

Proportionality and Procedural Safeguards

Both Judges referred to Supreme Court precedent and acknowledged that Rule 3(1)(b)(v) must satisfy the test of proportionality. The divergence in their proportionality analysis is effectively captured by the necessity limb. Justice Gokhale found that Rule 3(1)(b)(v) is necessary because the Government is concerned about the negative impact of false information spreading on social media, which may threaten public order (¶54 Gokhale J.). However, this is not what the test of necessity requires. The necessity limb of the proportionality test requires the court to examine whether there exist alternative measures that are equally effective. In other words, could the Government have adopted a less rights-restrictive but equally effective measure? For example, Justice Patel highlights that the Government, through the Press Information Bureau, is well equipped to independently clarify or rebut incorrect information concerning its workings (and in fact regularly does so) (¶182 Patel J.). Therefore, in his eyes, Rule 3(1)(b)(v) failed the test of necessity.

The last limb of the proportionality test concerns the existence of procedural safeguards. Justice Gokhale highlights that users whose content is taken down can approach the intermediary with a grievance, and the intermediary is required to address this grievance within 15 days (¶19 Gokhale J.). Further, she highlights that users can also approach the Government’s own Grievance Appellate Committee specifically tasked to deal with online content (¶29 Gokhale J.). Lastly, Justice Gokhale finds that merely because the FCU is staffed by members of the executive, and adjudicates the correctness of information that concerns the Union Government, it cannot be presumed that the Government will be biased (¶25 Gokhale J.). To substantiate this finding, Justice Gokhale refers to Crawford Bayley v. Union of India, where the Supreme Court held that the Government’s appointment of an Estate Officer was not a ground to presume bias in the actions of the Officer (¶26 Gokhale J.).

Two points may be made here. First, the requirement of procedural safeguards cannot be satisfied merely by the existence of an appeal process. There must exist meaningful safeguards against abuse in the operation of the Rule, not merely redress against its outcomes. For example, there is no transparency in how the FCU operates, no requirement that it provide any reasoning for its decisions, and no requirement that a user be heard before their content is taken down. On the question of bias, it is submitted that there is a difference between an Estate Officer and individuals tasked with adjudicating the legality of speech concerning the Government. The Supreme Court has for decades closely supervised the composition of statutory tribunals precisely because of concerns over independence. Indeed, the Supreme Court has even struck down a constitutional amendment that it found to offer the executive too much influence over judicial appointments. These decisions highlight that the ex-ante composition and independence of decision-makers does matter, particularly where the government itself has a stake in the decision made by the tribunal or bench. As Justice Patel observes,

There is no safeguard against bias. There are no guidelines, no procedure for hearing, no opportunity to counter the case that some information is fake, false or misleading. […] Even more disturbingly, the Rule clearly makes the Central Government a judge in its own cause. (¶189 Patel J.)”     

Given that Rule 3(1)(b)(v) concerns the falsehood of information concerning the Government, the composition of the FCU undoubtedly matters. The FCU allows the Government to exclusively decide whether information concerning the Government in the public realm is correct or wrong.

Conclusion

These two blog posts have attempted to capture the analytical differences between the Judges on the constitutionality of Rule 3(1)(b)(v). But ultimately this case was also about the citizen-State relationship and the role of free speech in mediating this relationship. This case concerned the Government claiming a unilateral right to determine what information in the public sphere about it was true. The Government inter alia argued that false speech about the government was not protected, that the Government acted as parens patriae for citizens in the informational ecosystem, and that the Government was fulfilling a sovereign duty to ensure that citizens get the ‘correct’ information about the Government. These are troubling arguments that are antithetical to our understanding of democracy, which is premised on citizens freely choosing amongst competing visions of the public good. Justice Patel’s opinion emphatically rejected these submissions:

I reject without hesitation the attempt to curtail Article 19(1)(a) buried in the submission that the fundamental right is to ensure that every citizen receives only ‘true’ and ‘accurate’ information – as determined by the government. It is not the business of the government to keep citizens from falling into error. It is the other way around. For it is very much the business and should be the duty of every citizen to prevent the government from falling into error. Governments do not select citizens. Citizens elect governments. (¶158 Patel J.)”  

The baton is now passed to the judge(s) tasked with resolving the split verdict.

*Disclaimer: One of this Blog’s editors (Gautam Bhatia) was a counsel in this case. He was not involved with the publication of this post.

The Bombay High Court’s split verdict on Government Fact-Checking under the IT Rules (Part I)

On 31 January 2024 a Division Bench (two Judges) of the Bombay High Court delivered a split verdict (here and here) on the constitutionality of the 2023 amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). Rule 3(1)(b)(v) of the IT Rules required intermediaries (such as online platforms) to make reasonable efforts to not host content that is patently false or concerned “any business of the Central Government” that has been flagged by the Union Government’s ‘Fact Checking Unit’ (“FCU”). If the intermediaries violated the Rule, they would lose the statutory immunity from liability (or ‘safe harbour’) provided to them under Section 79 of the Information Technology Act, 2000 (“IT Act”). Justice Neela Gokhale upheld Rule 3(1)(b)(v) while Justice Gautam Patel struck it down.

This two-part series critically analyses the areas of disagreement between the two judgements. This first blog examines two foundational differences about the scope and content of Rule 3(1)(b)(v). First, the judges disagreed over what the consequence of violating the Rule was, and thus whether the Rule even restricted users’ speech. Next, the judges disagreed over whether Rule 3(1)(b)(v) only prohibited content concerning the Union Government which was shared with the knowledge that it was false, or any content about the Government. These disagreements ultimately end up shaping the free speech analysis of each judge, which will be discussed in the second post (here).

Safe harbour and free speech

Justice Gokhale found that Rule 3(1)(b)(v) did not actually require an intermediary to remove content flagged by the Union Government’s FCU. According to her, once the FCU flags content, the intermediary acts according to its “existing policy” and can either remove the content or retain it while showing a disclaimer or warning to users that the content is false (¶16-17 Gokhale J.). Crucially, she rejected the Petitioners’ concern, ruling that the mere fact that an intermediary loses safe harbour if it fails to remove FCU flagged content would not automatically cause the intermediary to remove the content (¶19 Gokhale J.). She noted that the loss of safe harbour would only expose an intermediary to liability for hosting unlawful speech, and that the intermediary could always defend itself against such liability in court (¶29 Gokhale J.). This understanding of the consequences of loss of safe harbour is central to Justice Gokhale’s opinion as it allows her to characterise Rule 3(1)(b)(v) as a relatively innocuous measure that does not ultimately result in the removal of content or have a significant bearing on free speech.

With respect, Justice Gokhale’s approach fails to grasp the incentives of intermediaries or the role of safe harbour in protecting free speech. Intermediaries host millions of pieces of content every day. This content is not their own; it is that of their users. Further, intermediaries make negligible amounts of money from any single piece of content. The result of these realities is that if intermediaries are at risk of being held liable for a piece of content, the easiest thing for an intermediary to do is to remove it and avoid the time and money associated with litigating the legality of such content. This is not conjecture: it has been empirically demonstrated by studies that sent legal notices to intermediaries and recorded whether they removed the content or contested the notices. Intermediaries’ unwillingness to defend their users’ speech in Court against the government is perhaps best demonstrated by the fact that not a single intermediary challenged Rule 3(1)(b)(v).

Precisely to avoid intermediaries removing content at the drop of a hat, even when it may be entirely lawful, Parliament through Section 79 of the IT Act granted them conditional immunity for hosting user generated content. The Supreme Court in Shreya Singhal v. Union of India (“Shreya Singhal”) also recognised that safe harbour was essential to protect free speech on the internet and ruled that an intermediary will only lose safe harbour if a court or government agency requires it to remove content. Perhaps most crucially, safe harbour immunity protects the intermediary from liability even if the speech is illegal. In a country like India, with broad laws criminalising large swathes of speech, losing safe harbour can not only embroil an intermediary in expensive and time-consuming litigation; a single adverse verdict or conviction can cripple it. This creates strong incentives for intermediaries to remove the content rather than risk losing safe harbour. Thus, requiring intermediaries to remove content under threat of losing safe harbour is in reality no different from asking them to take it down.

Justice Patel’s opinion expressly acknowledges this. At paragraph 81 he notes:

Between safe harbour and user’s rights regarding content, the intermediary faces a Hobson’s choice; and no intermediary is quixotic enough to take up cudgels for free speech. Compromising one particular chunk of content is a small price to pay; better the user content is thrown under the bus than having the bus run over the entire business. The safe harbour provision is therefore not just intermediary-level insulation from liability. It is an explicit recognition of a free speech right. (emphasis supplied)”

Justice Patel’s opinion recognises that once content is flagged by the Government FCU, the intermediary’s most obvious and indeed rational course of action would be to remove this content. Unlike Justice Gokhale, Justice Patel also notes that once content is flagged by the FCU, there is no room for the intermediary to apply its own mind or its policies; it is the FCU that is the arbiter of the falsehood of the content, and the intermediary is merely required to remove it at the threat of losing safe harbour (¶73 Patel J.). Recognising this key aspect allows Justice Patel’s opinion to accurately capture the threat to free speech posed by Rule 3(1)(b)(v): requiring an intermediary to remove content flagged by the government, under threat of stripping it of its safe harbour, amounts to an indirect takedown request by the government. Viewed in this manner, Rule 3(1)(b)(v) poses a direct risk to free speech and causes Justice Patel to engage with the doctrines of overbreadth, vagueness, proportionality, and the permissible grounds to restrict speech in a detailed manner (as discussed in Part II of this blog-series).

The knowledge requirement

Rule 3(1)(b)(v) required intermediaries to make reasonable efforts not to host content that:

deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any misinformation or information which is patently false and untrue or misleading in nature or, in respect of any business of the Central Government, is identified as fake or false or misleading by such fact check unit of the Central Government.”

The judges disagreed over whether this text covered one or two classes of content. In Justice Patel’s view, the text outlined two different sets of content: (i) where the sender knowingly and intentionally shared information which was false or misleading; and (ii) content concerning the Union Government flagged by the FCU. Crucially, Justice Patel held that because the two sets of content were separated by the word “or”, the requirement that the content was shared with the knowledge that it was false did not apply to FCU flagged content (¶59 Patel J.). In other words, the intermediary had to remove content that was flagged by the FCU even if it was not shared with an intention to mislead. The FCU was the sole arbiter of whether the content ought to stay up or not irrespective of user intention (¶69 Patel J.).

Justice Gokhale however ruled that the requirement that the content be shared with knowledge and intent to mislead applied even to FCU flagged content (¶40 Gokhale J.). Justice Gokhale offers two justifications for this position, both of which, with respect, are deeply flawed. First, Justice Gokhale simply reproduces the rule as interpreted by her (i.e., “knowingly and intentionally communicates information in respect of any business of the Central Government, is identified as fake or false…”). It is trite law that a judge cannot re-write a statute in the guise of interpreting it. Here, however, there is no guise; the judge has simply re-written the rule. This cannot be considered reasoning, let alone sound reasoning.

Second, Justice Gokhale states that an intermediary is granted safe harbour because of its passive role, but once an intermediary has knowledge and intent, it loses safe harbour. Thus, a knowledge and intention requirement must be read into the Rule (¶40 Gokhale J.). This is a conflation of two different knowledge requirements: the knowledge of the intermediary and the knowledge of the user. A perusal of Rule 3(1)(b)(v) demonstrates that it is concerned with the sender’s knowledge. The Rule restricts content that is knowingly or intentionally communicated. It is users who communicate information, and therefore Rule 3(1)(b)(v) targets situations where a user knowingly shares misinformation. Completely independent and unrelated to this is the question of whether the intermediary has knowledge of unlawful content on its network. Prior to the Supreme Court’s decision in Shreya Singhal, an intermediary would lose safe harbour if it had knowledge of unlawful content on its network but failed to remove it (post Shreya Singhal, the intermediary does not lose safe harbour until it receives a court order requiring takedown). It is submitted that Justice Gokhale’s reasoning, that because an intermediary (used to) lose safe harbour upon having knowledge of unlawful content on its network, Rule 3(1)(b)(v) should be interpreted to require that even FCU flagged content be shared with the user’s knowledge that it is false, incorrectly conflates two entirely unconnected knowledge requirements.

One final observation: Reading in a knowledge requirement may seem to diminish the risk to free speech because it raises the threshold for content that can be flagged by the FCU. However, three points need to be noted here. First, because this interpretation is at odds with the text of the Rule itself, there is no guarantee that the government officials at the FCU will interpret the Rule as Justice Gokhale does. Second, verifying the intention of internet users in a time-bound manner is nigh impossible and opens the door to error and abuse. Third, there are no procedural safeguards to ensure that the FCU interprets the Rule in this manner.

Conclusion

These foundational differences between the two Judges cause them to characterise the disputed Rule in diametrically opposed manners. In Justice Gokhale’s opinion, Rule 3(1)(b)(v) does not require or cause the removal of content. Further, by circumscribing the Rule to only those cases where users intentionally share false information, she gives the Rule a narrow field of operation. In Justice Patel’s opinion, however, Rule 3(1)(b)(v) represents an indirect restriction on speech concerning the Union Government that is enforced by threatening to strip intermediaries of safe harbour. Further, because Justice Patel finds that the Rule could be applied to remove any information concerning the Union Government regardless of why it was shared, it raises the spectre of indirect censorship. Understandably, these conclusions on the scope and effect of Rule 3(1)(b)(v) substantially influence each judge’s analysis of the free speech risks the Rule raises. Both Judges’ discussion on issues of overbreadth, vagueness, and the permissibility of restricting false speech under Article 19(2) will be discussed in the subsequent blog post (here).

*Disclaimer: One of this Blog’s editors (Gautam Bhatia) was a counsel in this case. He was not involved with the publication of this post.

The Karnataka High Court on Twitter’s complaint: Carte blanche to the government

This week, the Karnataka High Court delivered its judgement on Twitter’s legal challenge to the Union Government’s ‘blocking orders’ under Section 69A of the IT Act. The orders restricted numerous tweets and entire user accounts on the social media platform. The various constitutional and legal infirmities of Section 69A have previously been discussed on this blog (here) and (here); but to briefly recap: Section 69A permits the Union Government to restrict access to online content in the interests of the sovereignty, integrity, security, or defence of India, its friendly relations with foreign states, public order, or the incitement of any offence relating to these categories. Section 69A also stipulates that the reasons for blocking content must be recorded in writing, and Rule 8 of the 2009 Blocking Rules stipulates that when the Union Government seeks to block content, it must “make all reasonable efforts to identify the person or intermediary” hosting the content, issue them a notice, and offer them a hearing prior to blocking content.

The case boiled down to two key issues. First, whether Section 69A empowered the Union Government to block entire accounts, as opposed to individual tweets. Second, whether the Government was required to communicate its reasons for blocking to the users whose content was being blocked, as opposed to merely communicating the reasons to Twitter. The High Court ruled against Twitter on both counts. Even more worryingly (if not surprisingly), the High Court’s judgement is less a holistic and nuanced examination of these issues within the broader context of how censorship is operationalised online, and more an example of how courts can use every interpretive tool in the book to grant the government carte blanche to restrict free expression online.

Blocking accounts instead of tweets

Twitter’s contention that Section 69A only permitted the Government to block individual tweets and not entire accounts rested on two arguments. First, Twitter argued that blocking entire accounts is disproportionate because a single account has a multitude of tweets, only some of which may be unlawful. Thus, Twitter argued that the Government’s restrictions should be limited to unlawful tweets, as blocking entire accounts would restrict any lawful tweets associated with the blocked account.  

The High Court dismissed this argument by simply stating “An exercise to differentiate amongst the individual tweets in an account and segregate the offending ones from the innocuous, is impracticable and would not serve the statutory purpose.” (pg. 93-94). The inadequacy of this reasoning is evident from an examination of both the technological and governmental capacities of the State and the applicable legal standard applied by other Indian courts. Anybody who has used Twitter recognises that it is possible to acquire the URLs for individual tweets as distinct from the URLs of accounts, which makes it technologically possible to differentiate between tweets and accounts. Moreover, as the High Court documents in the judgement, the Government itself blocks individual tweets in some cases, and entire accounts in other cases. Therefore, the government itself differentiates between tweets and accounts and possesses an internal rationale for when a tweet should be blocked and an arguably higher threshold for when an account should be blocked. Thus, the Karnataka High Court has adopted an approach that is less rights-protective than current governmental practice – a truly remarkable achievement for any constitutional court.

The High Court’s reasoning also flies in the face of the proportionality standard adopted by Indian courts since Puttaswamy (which the High Court references several times but fails to apply). Blocking an entire account (and numerous lawful tweets) to curtail a single or handful of allegedly unlawful tweets is not the least restrictive measure available to the government, and depending on the context, is disproportionate. The Karnataka High Court’s unwillingness to engage in a meaningful proportionality analysis here can be contrasted with the approach taken by the Delhi High Court faced with a similar conundrum.

In Dept. of IT v Star India, the Delhi High Court had to determine whether to block individual webpages or an entire website for hosting copyright infringing content. Just as a Twitter account consists of lawful and unlawful tweets, an entire website consists of lawful and unlawful webpages. Rather than merely saying it is too difficult to differentiate between individual webpages and the website as a whole, the Delhi High Court came up with a list of factors to consider before blocking an entire website, such as: (i) the comparative importance of the rights at issue; (ii) the availability of less restrictive measures; (iii) the costs of implementing the measures; (iv) the efficacy of the measures; and (v) whether the dominant activity of the website was unlawful. Rather than adopt a similar approach (which is faithful to the proportionality standard laid down in Puttaswamy) and require the Government to evaluate similar factors before initiating account level blocking, the Karnataka High Court simply rules that such an exercise would be impractical.

Twitter’s second argument against the blocking of entire accounts was that the language found in Section 69A, “generated, transmitted, received, stored, or hosted,” contemplated an ex-post restriction on already published content, and could not be applied to future content. Crucially, a user’s Twitter account is not merely a repository of all tweets they have posted, but also the mechanism to post new tweets. Thus, blocking a user’s account not only restricts their previous tweets but also prohibits them from posting new tweets, which Twitter argued was beyond the scope of what Section 69A permitted. 

In rejecting this argument, the High Court ruled that the “verbal form” of the words remained the same even if their effect is extended to the future tense. It is submitted that the legal effect of extending Section 69A to encompass future restrictions on unpublished content is drastically different from applying it to existing content. While restricting published speech attracts Article 19(1)(a) and proportionality scrutiny, blocking an account to prevent future speech is an act of prior restraint – the most severe form of free speech restriction. To impose a prior restraint on speech is to say that any speech by the speaker is presumptively unlawful, forcing the speaker to seek legal relief before exercising their constitutionally conferred free expression rights, even with respect to lawful speech. The Karnataka High Court clearly knows this, because it noted that “arguably innocuous” future speech may be restricted if Section 69A is interpreted to permit account blocking (pg. 58). However, the Court justified this by stating that the statute must be interpreted broadly to give effect to Parliament’s intention to empower the Government to block unlawful content online.

In effect, the High Court has examined the text of Section 69A, which is clearly framed against already published content (content that is ‘generated or hosted’), and, by permitting the blocking of accounts, has interpreted it to encompass prior restraints on future speech. In other words, the Court has interpreted the terms of the statute to expand governmental power to restrict free expression on the ground that Parliament intended to confer on the government the broadest possible power to block content. However, as we shall see, the High Court’s magnanimity towards Parliament’s intentions does not extend to portions of the statute that confer rights on users.

Communicating reasoned orders to users

The second prong of Twitter’s challenge was that the Union Government had failed to provide reasoned orders with respect to the content being blocked, and that users were not provided with notice and a hearing prior to their content being blocked. Recall that Section 69A requires that orders contain “reasons to be recorded in writing” and Rule 8 of the Blocking Rules requires that the Government make reasonable efforts to notify the “person (i.e., user) or intermediary.”

In rejecting Twitter’s contentions, the High Court noted that the orders and the reasons therein were disclosed to Twitter at various meetings (of which users were neither notified nor present). The Court then observed that where a statute requires an authority to record reasons in writing, the precepts of natural justice ordinarily require that such reasons be communicated to the person whose rights are being impacted by the authority’s decision. However, the High Court went on to note that this is not always the case and, given that the government on the whole had complied with procedural fairness by disclosing the orders to Twitter at the above-mentioned meetings, the non-communication of reasoned orders “in a formal way” would not vitiate the orders (page 78-79). With respect to whether users had to be given a copy of the orders and a hearing, the High Court observed that Rule 8 of the Blocking Rules used the phrase “person or intermediary”, not “person and intermediary”, and that a literal interpretation of this language led to the conclusion that the Government only had to notify Twitter, not the users.

With due respect to the High Court, it is submitted that every limb of the above reasoning is deeply flawed. First, providing the reasoned order to the impacted individual is at the heart of procedural due process, not an optional extra that can be dispensed with based on the government’s broader conduct in a given instance. Beginning with Maneka Gandhi and the fusing of due process requirements into the evaluation of restrictions on other fundamental rights (including free expression), the government must provide notice, an opportunity to be heard, and an impartial tribunal when restricting constitutional rights. Further, these reasons must be given to the individual whose rights are affected. This ought to have been amply clear to the High Court from a reading of Shreya Singhal v Union. When interpreting Section 69A in 2015, the Supreme Court observed,

It is also clear from an examination of Rule 8 that it is not merely the intermediary who may be heard. If the “person” i.e. the originator is identified he is also to be heard before a blocking order is passed. […] Thirdly, reasons have to be recorded in writing in such blocking order so that they may be assailed in a writ petition under Article 226 of the Constitution.

It is clear that the interpretation of Section 69A adopted by the Supreme Court (by which the High Court ought to have been bound) conceived of a regime where: (i) the Government made reasonable efforts to find individuals whose content was being blocked; (ii) the Government issued identified users a notice containing the reasons why their content was being blocked; and (iii) users could approach High Courts and challenge the Government’s reasons for blocking their content. Admittedly, there may exist edge cases where the Government is unable to identify the users or where notifying users may be counter-productive (I have discussed this here). Rule 8 requires the Government to make “reasonable efforts” to identify the user concerned. However, the High Court, by interpreting the phrase “person or intermediary” literally, to mean either one or the other, effectively exempts the Government from making any efforts to identify and notify users. This is particularly egregious as the Government has blocked tweets by Members of Parliament, Members of State Legislative Assemblies, and journalistic organisations that are eminently identifiable.

Attentive readers will by now have noticed the divergence in the High Court’s interpretive approach. When interpreting the scope of the Government’s power to restrict speech, the High Court adopts an expansive interpretation, extending the Government’s power to include prior restraint over future speech on the ground that Parliament intended to empower the Government to act against unlawful speech online. However, when interpreting the procedural safeguards that protect citizens’ rights against the Government, the High Court adopts a literal interpretation, strictly and narrowly construing the procedural constraints on Government power that protect individuals’ rights. Arguably, if the statutory text discloses Parliament’s intention to empower the Government to act against unlawful speech, it equally discloses Parliament’s intention to protect users’ due process rights in the face of this governmental power. But not in the High Court’s eyes.

Conclusion

At the start of the judgement, the High Court examined the question of whether Twitter (as a foreign company) can even approach the Court under Article 226 of the Constitution. Although the High Court concluded that Twitter could approach the Court, the Court is at pains to emphasise that: (i) Twitter itself does not possess any Article 19(1)(a) free expression rights as it is not a citizen; and (ii) Twitter cannot espouse the free speech rights of its users (who are Indian citizens). When these observations are paired with the High Court’s above-discussed findings, the fabric of rights-deprivation woven by the High Court comes to the fore.

Social media platforms sit between users and governments, with users relying on platforms to exercise their free expression rights, while governments equally rely on platforms to enforce restrictions on free expression. The structure created by the High Court’s judgement is this: (i) the Government can rely on platforms to restrict citizens’ free expression; (ii) platforms themselves cannot exercise free speech rights against the Government; (iii) platforms cannot espouse the free speech rights of their users; and (iv) users can be denied the minimal due process that would enable them to challenge the Government’s restriction of their free speech rights. The result is to carve out online free expression as a realm where the Government can act with impunity, the platforms that possess the information required to challenge government restrictions are legally barred from doing so, and the users whose actual rights are being infringed can be kept in the dark.

The Phantom Constitutionality of Section 69A: Part II (Twitter v the Union)

Part one of this two-part series (here) examined how the Supreme Court upheld Section 69A and the Blocking Rules on the assumption that there existed adequate safeguards against lawful content being taken down. However, we noted how, in practice, the Blocking Rules offer few to no procedural safeguards against the takedown of content by the Government, raising the risk that lawful speech may be curtailed without oversight or contestation. We then noted how challenges to the application of Section 69A and the Blocking Rules, like the one in Tanul Thakur, offer courts a chance to revisit these procedural safeguards, making them meaningful and strictly enforcing them. This post analyses another such challenge to the application of Section 69A: Twitter’s writ petition in the Karnataka High Court.

Twitter is not challenging the constitutionality of Section 69A; it is merely arguing, inter alia, that with respect to a set of thirty-odd blocking orders, no notices were issued, nor hearings granted, to the individuals who created and uploaded the content (ie, ‘originators’). Twitter thus contends that the blocking orders it has received are not in compliance with Section 69A and the Blocking Rules and cannot be given effect.

Recall that Rule 8 of the Blocking Rules requires the Government to notify “the person or the intermediary” that is hosting the content the Government wishes to block. When the Government passes a blocking order against content on Twitter, the Government sends the blocking order to Twitter as the relevant intermediary. Twitter’s lawsuit appears to confirm what commentators have been noting for years, that in practice originators are regularly not granted a notice and hearing under the Blocking Rules, but rather blocking orders are merely issued to intermediaries such as Twitter, which are legally compelled to remove the content under threat of imprisonment (see Section 69A(3) of the IT Act).

Situating Twitter’s challenge

A couple of important points may be noted at this stage. First, the requirement of confidentiality under Rule 16 of the Blocking Rules, coupled with the lack of notice to originators, creates a system of secret censorship where the Government directly communicates content it wishes to block to technology platforms behind closed doors. Neither citizens nor judges are permitted to examine the contents of blocking orders, creating serious rule of law and free speech concerns. These problems are amplified as platforms have few incentives to defend their users’ content against the Government and can be expected to simply take down all content the Government wants them to (this also makes Twitter’s challenge at least superficially commendable).

Second, there may be legitimate situations where the Government cannot locate the originator. In fact, Rule 8 of the Blocking Rules is categorically phrased as an obligation to “make all reasonable efforts to identify the person or intermediary who has hosted the information” so as to notify them and grant them a hearing to defend their content. Thus, from a due process perspective, the crux of Twitter’s petition comes down to answering when Rule 8 permits the government to not identify the originator and supply them with the blocking order. This may be further broken down into the following sub-questions: (1) do both the intermediary and the originator need to be notified under Rule 8; (2) who is responsible for notifying the originator, the government or can the intermediary also do so; and (3) when has the government made “all reasonable efforts” to locate the originator?  

Maintainability of Twitter’s petition

Before examining these questions, it is relevant to note that the Union has raised a preliminary objection to Twitter’s writ petition. It argues that Twitter’s action is effectively a writ petition to secure the free speech rights of Twitter’s users, and that Twitter, as a foreign corporation, neither has free speech rights under the Constitution nor can sue on behalf of its users to secure their constitutional rights. Twitter, on the other hand, has chosen not to frame this as a free speech issue, instead characterising the legal action as mere judicial review of administrative action. In other words, Twitter alleges that by failing to notify originators, the Government is not following the procedure under Section 69A and the Blocking Rules, and its writ petition is merely one to ensure that the Government complies with the procedures set up by statute.

It is obvious that where a statute (or delegated legislation) requires the Government to carry out an act in one way, the act must be carried out in that way and no other (Laxami Devi v State of Bihar). Therefore, if the Blocking Rules require the originator to be notified, then the Government cannot block content without notifying the originator. Thus, Twitter’s challenge may be viewed merely as an action to compel the government to act in accordance with the law, without getting into the murkier question of whether platforms can sue for the free speech rights of their users.

There are other reasons why the High Court may consider overruling the Union’s preliminary objection. First, as the proceedings in Tanul Thakur demonstrate, ordinary citizens who do possess free speech rights cannot challenge government blocking due to the lack of notice and hearing, and the confidential nature of blocking orders (Thakur was denied the blocking order even under the RTI Act). It would be counter-intuitive for the High Court to uphold the Union’s maintainability objection on the ground that originators, not Twitter, must challenge the blocking orders, when the subject matter of Twitter’s challenge is the very structure that prevents originators from challenging blocking orders, and Twitter is the only entity in possession of the orders. Further, given that Indian courts have diluted the requirement of locus standi to the point of obliteration in writ petitions, Twitter’s standing in this case would be an odd doctrinal hill to die on (eg, as a counterfactual: would the court have dismissed such a petition if filed by a ‘public spirited citizen’?).

Notifying originators

The Union notes that Rule 8 uses the phrase ‘originator or intermediary’, and not ‘originator and intermediary’, thus contending that supplying the intermediary alone with the blocking order satisfies the requirements of Rule 8. However, a blocking order certainly constitutes a restriction on the originator’s right to free speech; after all, it is content that the originator created or uploaded that is being blocked. Indeed, the Union’s own objection to the maintainability of Twitter’s petition argues that it is originators, not intermediaries, who possess free speech rights against blocking orders. Thus, irrespective of the constitutionality of the State’s restriction, it cannot be denied that the originator’s free speech rights are engaged by blocking orders under Section 69A. Once the originator’s legal and constitutional rights are engaged, it flows naturally that both due process and the principles of natural justice demand that Rule 8 be interpreted in a manner that provides originators with the blocking order and an opportunity to contest it.

As noted previously on this blog (here), such an interpretation also clearly flows from the decision in Shreya Singhal v the Union, where the Supreme Court observed:

 It is also clear from an examination of Rule 8 that it is not merely the intermediary who may be heard. If the “person” i.e. the originator is identified he is also to be heard before a blocking order is passed. 

Where the originator behind online content has openly taken ownership or is prima facie identifiable and contactable, there is no justification for them not being notified, and it stands to reason that the Government itself can notify them. However, there may be situations where the originator is not easily identifiable to the Government but is identifiable to the platform. For example, a Twitter account may have the username ‘xyz’, but Twitter itself may possess the user’s email id or the ability to notify the user within the platform. The Union has contended that in such situations, Twitter itself could have informed its users of the blocking orders. However, this situation is complicated by the existence of Rule 16 of the Blocking Rules, which states that all actions taken pursuant to a blocking complaint must remain confidential. It is unclear whether Rule 16 prevents an intermediary such as Twitter from supplying the relevant user with the blocking order, and the Karnataka High Court should clarify that it does not. This would preserve the anonymity of the user should they desire it, but also allow them the opportunity to contest the blocking order if they so desired.

Notifying identifiable users, or permitting platforms to provide their users with the blocking orders, would certainly fall within the realm of ‘reasonable efforts’ to notify originators under Rule 8. There may be narrower situations concerning websites, where the originator is not immediately identifiable and there does not exist a centralised authority, like a grievance officer at a social media platform, that can notify individual website owners. What constitutes ‘reasonable effort’ to notify in such cases may have to be developed on a case-by-case basis; eg, the government could notify domain name registrars, who could in turn notify the owners of the website. Nobody is suggesting the government be required to use investigatory resources to identify and notify originators, but notice should be delivered to publicly available contact information.

Alleged need for secrecy

As discussed above, Rule 16’s confidentiality requirements must at least be read narrowly enough to allow the originator to be provided with the blocking order. However, even this interpretation does not address our earlier rule of law concern: the government and technology companies colluding behind closed doors to block content. This practice is hit directly by another facet of Article 19(1)(a), the right of all citizens to receive information (Ministry of Information & Broadcasting v Cricket Assn of Bengal). When content is blocked under Section 69A, not only is the originator’s right to speak curtailed, but every citizen’s right to receive (the blocked) information is also restricted. Thus, in principle, any citizen should be entitled to challenge a blocking order, as it restricts their right to receive information under Article 19(1)(a). Once this is accepted, Rule 16 cannot stand in its current form, as every citizen has a right to see the blocking order and potentially challenge the restriction on their right to receive information in court. A first step towards ensuring this would be to mandate that, when someone attempts to visit a blocked webpage, the Government and internet service providers display a notice indicating that the webpage has been blocked under Section 69A.

The Union has contended that issuing notices to originators may alert them and cause them to evade law enforcement, or cause them to become more aggressive and spread content through other accounts. However, it must be noted that Section 69A is not an investigatory provision. The Government is not utilising Section 69A to apprehend originators, but merely to block their content. If the Government is, in parallel, investigating the identity of the originators, the simple answer is to not utilise Section 69A till they are apprehended. Blocking content is as likely to alert unlawful actors as a notice is, and it does not stop them from spreading similar content from other accounts or locations. There may be a narrow set of circumstances in which the Government may not wish to disclose the content being blocked, eg, to avoid drawing attention to websites that habitually host unlawful content such as child pornography or the glorification of violence. However, in such situations, the government’s rationale for not disclosing an order that restricts content should be testable by courts. As noted by the Supreme Court in Anuradha Bhasin v the Union:

As a general principle, on a challenge being made regarding the curtailment of fundamental rights as a result of any order passed or action taken by the State which is not easily available, the State should take a proactive approach in ensuring that all the relevant orders are placed before the Court, unless there is some specific ground of privilege or countervailing public interest to be balanced, which must be specifically claimed by the State on affidavit. In such cases, the Court could determine whether, in the facts and circumstances, the privilege or public interest claim of the State overrides the interests of the petitioner.

Conclusion

Before the internet, if the Government sought to restrict speech, it had the ability to target a few central locations such as newspapers, magazines, broadcasting organisations, and public figures. With the advent of the internet, ordinary individuals have been given an unparalleled ability to shape public discourse, increasing democratic participation. However, despite there being millions more speakers, resilience to censorship has not necessarily increased, because the concentration of speakers on online platforms such as Facebook, YouTube, and Twitter constitutes a key vulnerability and a vector through which censorship can be operationalised. Current practice, where the Government issues orders to social media companies and has content blocked, is in some ways even less resilient to censorship. Unlike newspapers and broadcasters, who are directly incentivised to protect their own content, social media companies have few incentives to defend their users’ content. Users are kept in the dark, and even if notified, may lack the legal resources that a traditional media company has to contest the State’s restriction.

Thus, provisions such as Section 69A and the Blocking Rules represent the last bastion of protection against State restrictions on free speech in the online domain. Due process is essential to ensuring that, if a blocking order breaches constitutional standards on free speech, it can be meaningfully challenged before a court. Such provisions must also reflect the architecture of the internet. For example, different types of intermediaries (a website, a social media platform, a chat forum) may have different levels of information on originators, and the point at which the government is deemed to have made “all reasonable efforts” to identify the originator should be tailored to the type of intermediary hosting or transmitting the content.

Lastly, due process also requires state capacity. While the majority of originators whose content is blocked by the Government may choose not to contest the blocking order (eg, if they know their content to be illegal), thousands of others may choose to challenge it. Simply to ensure due process under the Blocking Rules, the Government will need to facilitate hearings within the 48-hour period for anybody who decides to challenge a blocking order. With the government issuing thousands of blocking orders a year, State capacity represents another dimension to the due process challenges posed by online speech.

The Phantom Constitutionality of Section 69A: Part I

Section 69A of the Information Technology Act, 2000 (“IT Act”) empowers the Union Government to block content on the internet. The provision has been on the statute books for over a decade and survived a constitutional challenge in 2015. Nonetheless, subsequent legal and factual developments have once again shone a light on some of the provision’s underlying constitutional defects, as well as the government’s use of this power to restrict online expression. Perhaps most prominently, the rapid proliferation of the internet and its centrality to shaping public discourse has led to a manifold increase in the use of the government’s blocking power, with the number of orders passed under Section 69A increasing from 471 in 2014 to 9,849 in 2020.

For those unfamiliar with Section 69A, this first of two posts briefly recaps the contents of the provision and the Supreme Court’s reasoning in finding the provision constitutional. It then examines gaps in the Supreme Court’s judgement that have since been exploited by governmental practice, and how legal challenges to the provision’s application have sought to fill these gaps. The next post builds on these developments to analyse Twitter’s challenge to specific blocking orders in the Karnataka High Court.

Section 69A, its Contents and Constitutionality

Section 69A empowers the Union Government to block the public from accessing “any information” on the internet when: (1) the Government believes it is “necessary or expedient”; and (2) it is in the interests of the defence, sovereignty, integrity, or security of India or its relations with foreign states, public order, or the incitement of a cognisable offence relating to these categories. When the government decides to block content, it must follow the procedure set out under the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“Blocking Rules”).

When blocking content, Rule 8 requires the Government to “make all reasonable efforts to identify the person or intermediary who has hosted the information” and issue them a notice asking them to explain why the content should not be blocked. The person uploading or initiating the content transmission (ie, a content “originator” under the IT Act) or the intermediary has 48 hours to respond. Based on the information available at the end of the 48-hour period (which may or may not include the originator’s or intermediary’s response), the Government shall pass an order on whether the content should be blocked. The Blocking Rules also contain an emergency procedure (Rule 9), under which a hearing is not required and the Government merely passes an order directing that the content be blocked. There also exists a “Review Committee”, which is expected to provide post facto scrutiny, examining whether the Government’s orders only blocked content on the grounds set out by Section 69A (security of India, public order, etc.). Lastly, Rule 16 states that “strict confidentiality” is to be maintained with respect to actions taken under the Blocking Rules.

The constitutionality of Section 69A was challenged and upheld by the Supreme Court in Shreya Singhal v the Union (2015). The Court observed that blocking could only be resorted to in the interests of the security or defence of India, public order etc., which the Court highlighted were the same interests Article 19(2) of the Constitution permitted free speech to be restricted on. According to the Court, this demonstrated that Section 69A restricted expression in line with constitutional standards on free speech. Further, the Court ruled that the Blocking Rules ensured sufficient procedural safeguards against the Government restricting lawful speech by providing not just the intermediary, but also the content originator, with a hearing (if the latter could be identified). Crucially, the Court noted that blocking orders had to be made in writing, thus ensuring that “they may be assailed in a writ petition under Article 226 of the Constitution.” In other words, if the government’s decision to block specific content was ever in doubt as impermissibly restricting free speech, the blocking order in question could be challenged before a court under its writ jurisdiction.

Gaps in the Supreme Court’s analysis

Some gaps in the Supreme Court’s reasoning were evident on the day of the judgement, while others have been revealed over time. As has been argued on this blog before (here), the Court’s rationale that ‘because Section 69A permitted the executive to restrict speech on the same grounds as Article 19(2), the provision was constitutionally compliant’ is flawed. This reasoning fails to acknowledge the distinction between: (1) constitutional standards that State action must be tested against by courts; and (2) statutory language that sets out when the government can interfere with rights. The former constitutes the set of legal principles used to determine whether the latter is constitutionally compliant. For example, Article 19(2) permits restrictions on speech in the interests of ‘decency and morality’. When the Cinematograph Act was challenged, the Supreme Court examined guidelines issued under the Act as to what constituted ‘restrictions in the interest of morality’; restrictions were to be made in cases of ‘lowering the sacredness of the institution of marriage and depicting rape, seduction and criminal assaults of women, and solicitation’. The existence of such specific language allowed the court to evaluate whether the type of content the government sought to restrict was in fact in the interests of ‘decency and morality’.

Incorporating constitutional standards directly within statutes, with no greater specificity, results in a breakdown of constitutionality scrutiny by courts, because the statute has adopted a standard identical to the constitutional one, leaving nothing for the statute to be tested against. Take the following example: the Constitution says ‘state interventions are constitutional in the interests of public order’; the statute says ‘the government can restrict speech in the interests of public order’ but provides no further guidance. Logically, a court cannot test whether a statute expressly stated to be in the interests of public order satisfies the constitutional test of ‘is the statute in the interests of public order’. Such a statute is thus prima facie constitutional. But in truth, ‘public order’ covers a broad sweep of situations, with a wide margin for edge cases that may at first glance engage public order interests but, on further scrutiny, fall outside its confines. This means courts will have to examine every instance where such a law is applied to determine whether it satisfies the constitutional standard of public order. The effect of such a statute is, in essence, ‘the government can interfere when it is constitutionally permissible to do so’ – eliminating judicial scrutiny at the time of a constitutional challenge and, in practice, allowing the executive to determine what is constitutionally permitted until each instance of the law’s application is challenged before a court.

At this point it is worth appreciating that courts in India typically judge the constitutionality of a statute in the abstract, ie, whether the text of the statute (not the application of the law vis-à-vis a particular individual) is constitutionally permissible. Now, if the text of the statute effectively states, ‘the government will act constitutionally’, such a law can never be tested against constitutional standards (and consequently never struck down), because it amounts to nothing more than a promise on the government’s behalf to act within the inherently blurry confines of the Constitution. Two possible avenues of recourse may exist against such a statute. First (as noted here), such a statute may be challenged as delegating excessive power, with courts insisting that the statute set out more specific instances in which State intervention is permitted, so that they may then test whether such instances would satisfy constitutional standards. Second, the application of the law to specific individuals and situations may be challenged, allowing courts to differentiate between constitutional and unconstitutional governmental actions within the broad sweep of what the statute authorises. Developments since the Supreme Court upheld Section 69A demonstrate how this latter approach may play out.

Challenges to the application of Section 69A

Recall that the Blocking Rules require the Government to provide 48 hours' notice and a hearing to the originator or intermediary prior to taking a decision on blocking content. In Tanul Thakur v Union, the Petitioner alleged that his satirical website 'Dowry Calculator' had been blocked under Section 69A without his being served notice or granted a hearing. Several commentators have noted (see here and here) that the requirement of notice and a hearing is honoured more in the breach than in the observance, with the government often directly requiring intermediaries to block content and refusing to disclose blocking orders, citing the confidentiality requirement in Rule 16. This practice is particularly egregious in situations such as Thakur's case, or the blocking of the Caravan's Twitter account, where the originator has publicly taken ownership of the content, dispelling any notion that the Government could not locate and notify them.

In Tanul Thakur's challenge, the Delhi High Court directed the Government to provide the content originator with a copy of the blocking order and a post-facto hearing as to why his content should not continue to be blocked. While the High Court did not delve into a discussion of the constitutionality of blocking the satirical website, the order is an acknowledgement of: (i) the need to offer originators an opportunity to contest restrictions on their free expression; (ii) the government's failure to offer such a hearing (the government appears not to have contended that the order was passed under the emergency procedure); and (iii) the importance of supplying the originator with a copy of the blocking order. The order by the High Court takes on special significance given that the Supreme Court expressly upheld Section 69A on the understanding that the blocking order would be made available, allowing for court challenges to individual instances of content blocking. Thus, Tanul Thakur's case represents a challenge to the application of Section 69A that, at the very least, takes us back to how Section 69A was intended to operate by Parliament and the Shreya Singhal court. It should serve as valuable precedent mandating the disclosure of the blocking order to the originator and the grant of a hearing, ultimately facilitating a challenge under Article 226 before a High Court.

Conclusion

In Tanul Thakur, the petitioner discovered his website was blocked, filed an RTI to obtain a copy of the blocking order, was denied a copy on the basis of Rule 16's confidentiality requirements, and had to approach the High Court to obtain a copy of the order. Now, having obtained the order, the petitioner can in earnest begin to assert his constitutional argument that the satirical website 'Dowry Calculator' does not fall under the heads of security, defence, integrity of India, public order, or incitement to an offence. This is a single instance of content blocking; the Government issues thousands of such blocking orders every year.

Most importantly, Tanul Thakur represents how, independent of the lawfulness of the speech being blocked, the application of Section 69A and the Blocking Rules raises significant questions of due process. Individuals cannot meaningfully assert that their free speech is being restricted online absent vital procedural safeguards. Unlike older media such as print and broadcast, where government restrictions were applied against a handful of clearly identifiable organisations, Section 69A represents a new frontier of government restrictions on speech, where the content of a multitude of citizens is blocked, often through intermediaries such as Google, Twitter, and Facebook. With the decision in Shreya Singhal creating a preliminary bar against an immediate constitutional (re-)challenge to Section 69A, the manner in which courts scrutinise the Government's application of Section 69A will likely shape how due process, the rule of law, and constitutional speech protections operate within this new paradigm of free speech. Twitter's recent challenge to specific blocking orders before the Karnataka High Court represents one such opportunity and will be discussed in the next post.

“Freebies” at the Supreme Court

In Ashwini Kumar Upadhyay v the Union, the Supreme Court is currently hearing a petition challenging the practice of political parties promising "irrational freebies" if elected to power. This post examines past instances where courts have been asked to curb identical practices, most notably the distribution of colour TVs and laptops by state governments in Tamil Nadu. However, unlike those cases, where the Court was faced with concrete schemes (identifying the exact schemes alleged to be "freebies"), the petition in Ashwini Kumar Upadhyay is a plea to restrict the practice in the abstract. Indeed, a perusal of the courtroom exchanges suggests substantial disagreement over what constitutes a "freebie", highlighting the problems with engaging such issues in the form of public interest litigation. The crux of the issue can be summed up in two observations by the Chief Justice: first, that 'India is a welfare state and individuals want public distribution schemes, but money should also be spent on building infrastructure'; and second, that there is an element of fiscal irresponsibility in such schemes.

The disagreement over what constitutes a "freebie" and the Chief Justice's concerns raise two principled issues: (i) how to decide what is 'good' and 'warranted' public expenditure (e.g., should the government build roads to increase real income by facilitating economic activity, or should it directly give cash to people), and (ii) who should choose amongst competing approaches to 'good/warranted' public expenditure. Examining past decisions of the Court, it is apparent that what constitutes 'good' public expenditure has turned on interpreting the phrase 'for a public purpose' in Article 282 of the Constitution (more on that later). On the second question, while it is traditionally the elected government that decides what approach to pursue, the petition in Ashwini Kumar Upadhyay seeks judicial regulation of what kinds of approaches an elected government should be allowed to promise voters, shifting the balance away from elected governments to the judiciary and technocrats. This is demonstrated by the Court's 3 August order constituting an 'Expert Committee' to decide whether parties should be allowed to offer "irrational freebies".

This post begins by examining how the Court, when confronted with the promises by Tamil Nadu governments of TVs and laptops, effectively refused to engage in this area. Relying on Amartya Sen’s ‘capabilities’ framework, it then argues that deciding what constitutes ‘good’ public expenditure is not a technical problem, but rather a value judgement by each citizen on whether a governmental measure will help them achieve the goals they themselves desire in the circumstances they face. Given this, the answer to the second question of ‘who decides’ must be – the citizens. The post concludes by suggesting approaches the Court could adopt that are consistent with its constitutional role and past decisions.

Colour TVs in Tamil Nadu

As recently as 2013, the Supreme Court in Subramaniam Balaji v Tamil Nadu heard a challenge to the practice of both DMK and AIADMK governments promising to distribute (at government expense) inter alia colour TVs, grinders, and laptops to citizens if elected to power. Among several issues, the Court had to determine: (i) whether such promises were "corrupt practices" (namely bribery) under Section 123 of the Representation of the People Act, 1951 ("RPA"); and (ii) whether these schemes were for a "public purpose" under Article 282. This was because Article 282 provides that the Union or States can make a grant (i.e., expenditure) on any subject, even if not empowered to pass laws on the subject, if the expenditure is for a "public purpose". Prior to Subramaniam Balaji, a Constitution Bench in Bhim Singh had interpreted the term "public purpose" broadly, as anything that furthered the Directive Principles of State Policy or the goals of political, social, and economic justice found in the Constitution's Preamble. However, the petitioners in Subramaniam Balaji argued that the distribution of household appliances did not further these objectives and thus expenditure on these items fell foul of Article 282.

On the question of bribery under Section 123 of the RPA, the Court in Subramaniam Balaji ruled that the promise to distribute goods if elected did not constitute bribery. The Court noted that almost every promise in an election manifesto (even if not promising "freebies") was a promise of some benefit to induce an elector to vote in a particular way (¶53). For example, the Court noted that even a promise to develop a particular locality was effectively a promise to induce a voter. Noting that Section 123 was a penal provision that had to be interpreted strictly, the Court ruled that manifesto promises were not hit by Section 123 of the RPA. This highlights the definitional problem of what constitutes a "freebie". Almost every promise by a political party, from loan waivers to grain distribution to distributing TVs, results in a benefit to someone, and the Court was unwilling to effectively criminalise the promise of a better future government.

However, this raises the issue of whether there is a material difference between building roads and simply handing over TVs, which was dealt with under the framework of Article 282's "public purpose". Is building roads expenditure for a "public purpose" while handing over TVs is not? The Court ruled that an elected government was within its rights to choose amongst competing ways to improve individuals' livelihoods. Justice Sathasivam, writing for the Court, observed:

“The concept of State largesse is essentially linked to Directive Principles of State Policy. Whether the State should frame a scheme, which directly gives benefits to improve the living standards or indirectly by increasing the means of livelihood, is for the State to decide […] The concept of livelihood and standard of living are bound to change in their content from time to time. It is factual that what was once considered to be a luxury has become a necessity in the present day. (¶¶ 57, 61) (emphasis added)”

To recap, on the question of what constitutes ‘good’ or ‘warranted’ public expenditure, the Court in Subramaniam Balaji tied “public purpose” in Article 282 to the broad concept of improving livelihoods but held that the provision did not discriminate between competing approaches to improving livelihoods.

 On the question of who decides, the Court was perhaps even more emphatic – not it.

“Judicial interference is permissible when the action of the Government is unconstitutional and not when such action is not wise or that extent of expenditure is not for the good of the State. We are of the view that all such questions must be debated and decided in the legislature and not in the court (¶62).”

Capabilities

The Court's approach to the two questions of what constitutes 'good' expenditure and who decides finds strong support in Amartya Sen's Development as Freedom. Sen notes that different philosophical theories offer competing answers to maximising public welfare. For example, utilitarianism may argue that public expenditure should maximise the 'utility' or 'well-being' of citizens (often translated into real income). However, Sen notes that such singular metrics fail to capture the diversity of individuals and the circumstances they face. The classic example provided is that of a bicycle: if the government were to distribute bicycles to all citizens, the value (or 'utility') derived from the bicycle would be radically different for an able-bodied and a disabled person. However, diversity is not limited to personal characteristics; it extends to the physical and social environments individuals face (e.g., rural and urban dwellers), the relational diversity amongst citizens (e.g., as hinted by Justice Sathasivam, a laptop may be a necessity for some and not for others), and the diversity in family structure (e.g., improved roads may not result in added utility to a woman if her husband does not let her drive).

Faced with this diversity, Sen argues that public welfare must be evaluated by the extent to which a measure enhances the freedom of individuals to pursue outcomes the individuals themselves value in the circumstances they face (i.e., an individual's capabilities). From this lens, the answer to whether public expenditure is 'good' or 'warranted' is not a static conception of "public purpose", or maximising utility, or long-term economic growth, but whether the expenditure enhances the ability of individuals to pursue outcomes they value. This may be different for a disabled person, an urban dweller, and a woman. This inescapably has a bearing on the second question of who decides what types of public expenditure should be undertaken.

Because the touchstone for public expenditure is whether it enhances an individual's ability to pursue goals they themselves value, Sen argues that deciding amongst competing governmental measures is not a mathematical or technical question, but rather a question of "valuation and judgement". He thus rejects placing such decisions in the hands of technocrats (like the Supreme Court's Expert Committee). Further, because this judgement has a social element, the "acceptability to others" of any decision arrived at is crucial. As individuals are best placed to decide whether roads or cash will help them pursue the goals they desire, the process of deciding must be one where all individuals have their say. Democratic processes ensure that this decision-making power is divided equally across the electorate, and thus that the outcome is sufficiently legitimate that individuals are willing to accept the result even if it is not exactly what they wanted. As Subramaniam Balaji notes, this discussion is best had in legislatures, and in the case of poll promises, amongst and by the electorate.

Conclusion

Perhaps most troubling of all is that we have been here before. In the aftermath of Subramaniam Balaji, while dismissing the petition, the Court requested the Election Commission of India to frame guidelines on the types of promises that could be included in election manifestos. These guidelines, now an annexure to the Model Code of Conduct, state that: (i) manifestos cannot contain anything repugnant to the ideals and principles of the Constitution; (ii) while no objection may be taken to the promise of welfare schemes, promises which contravene the 'purity of the election process' should be avoided; and (iii) manifestos should reflect the rationale of welfare promises and indicate the manner in which they will be financed.

It is submitted that these guidelines offer a direction the present hearings in the Supreme Court should take. Promoting scrutiny of, and public reasoning about, public expenditure is a valuable goal the Court is equipped to pursue. It would also combat the risk of fiscal irresponsibility highlighted by the Chief Justice. The Supreme Court (minus the farce that is now Electoral Bonds) has a long history of empowering the Election Commission to compel disclosure and aid informed decision-making by voters. Rather than enter the quagmire of whether "freebies" are a waste of public expenditure, the Court could focus on enforcing the Election Commission's guidelines on explaining the financing for welfare schemes. For example, the Congressional Budget Office in the United States provides independent analyses of budgetary and economic proposals. The Office is set up by statute, has strict rules on independence, and does not make policy recommendations because (in words Sen would approve of) 'public policy inevitably involves value judgements that the agency does not and should not make.' This represents an elegant balance between enhancing public scrutiny of government expenditure and letting citizens choose amongst competing approaches to their own welfare.

Can the Government’s Appellate Committee for online speech compel platforms to host content?

In June 2022, the Ministry of Electronics & Information Technology unveiled proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("Intermediary Guidelines"). The amendments propose the creation of one or more Grievance Appellate Committees ("GAC") to hear appeals against platform decisions to remove or keep up disputed content. Users can approach the GAC in two situations: (i) if the user complains against content and the platform fails to remove it; and (ii) if the platform voluntarily takes down a user's content or suspends a user's account, and the user wishes to reinstate their content or account. The GAC can direct the platform to either remove or reinstate content respectively.

The amendments raise several questions, beginning with whether the creation of the GAC is within the rule-making powers bestowed on the Government under the IT Act. However, this post analyses the power of the GAC to reinstate content that a platform has voluntarily taken down pursuant to its own Terms of Service (“ToS”) (or ‘Community Guidelines or Standards’). Imagine a situation where a platform removes a politician’s post for violating the platform’s ToS on misinformation or hate speech, but the GAC directs the platform to reinstate such speech. In such a situation, the GAC decision would compel the platform to host speech that violates its own ToS. This post argues that under current Article 19(1)(a) doctrine, compelling platforms to host speech contrary to their ToS would violate the platform’s own free speech rights unless the reinstatement furthers the informed decision-making necessary for self-governance. Thus, the GAC’s role should be limited to analysing whether the platform’s decision to remove the content was consistent with the platform’s own ToS.

At the outset, it is important to recognise a few key points. First, a platform's ToS forms a contractual relationship between user and platform. In other words, a platform's ToS sets out the kinds of content the platform will allow a user to post on its site – these are often branded as Community Guidelines or Standards. Second, ToS are often much broader than actual laws, and platforms may take down legal speech because they believe that such speech would lower the quality of a user's experience (often referred to as 'Lawful but Awful' content). Thus, when enforcing their ToS, platforms may remove content that constitutes misinformation, spam, or nudity even if it does not violate any laws. Third, a user aggrieved by a platform's decision to remove their content could go to court and make a contractual claim that their content did not actually violate the platform's ToS. But ToS typically grant platforms broad discretion in the kinds of content they can remove (and users have agreed to this contract); thus it is practically impossible to sue for reinstatement of content. This may also explain the government's motivation in introducing the GAC: to provide users with a method of getting content reinstated even after it is taken down by a platform.

Compelled speech and free speech

Free speech doctrine has long protected against government interference in the editorial discretion of organisations (e.g., newspapers) to decide what content to publish and what content not to publish. This protects the autonomy interest of the organisation by allowing them to control their message – ensuring the government cannot compel an organisation to publish content inconsistent with their beliefs. It also protects democratic self-governance by ensuring the government does not distort public discourse by compelling various organisations to carry biased pro-government content. However, as with all such freedoms, the right against compelled speech is not absolute.

In Union v Motion Picture Association, the Supreme Court analysed the validity of various statutes which compelled private cinema operators to screen scientific and educational movies. Prima facie this interfered with the editorial discretion of cinema operators to decide what movies to screen. The Court upheld the statutes, but laid down the following test for when it was permissible for the Government to compel organisations to carry speech:

Whether compelled speech will or will not amount to a violation of the freedom of speech and expression, will depend on the nature of a “must carry” provision. If a “must carry” provision furthers informed decision-making which is the essence of the right to free speech and expression, it will not amount to any violation of the fundamental right of speech and expression. If, however, such a provision compels a person to carry out propaganda or project a partisan or distorted point of view, contrary to his wish, it may amount to a restraint on his freedom of speech and expression.

The test laid down by the Court is clear. Where compelled speech furthers the democratic self-governance interest behind Article 19(1)(a), compelled speech may be a permitted interference with editorial discretion. Crucially, the government could not compel cinemas to screen propaganda or biased or false information. The Court also examined the reasonableness of the restriction, specifically noting that the statutes clearly prescribed the types of movies to be screened (scientific and educational) and the ratio of compelled movies to freely chosen movies (1 to 5). Thus, the restriction was narrowly tailored towards achieving the government's legitimate aim of furthering informed decision-making and did not grant the government unrestricted power to prescribe content.

(As an aside, this is also broadly the approach adopted by the United States Supreme Court, and interested readers may refer to Miami Herald Publishing Company v Tornillo and Turner Broadcasting v FCC. The former invalidated a requirement that newspapers publish replies to editorials that the newspaper disagreed with, and the latter upheld a content-neutral "must carry" provision on cable operators on the grounds that ensuring more viewers had access to a minimum number of broadcast stations furthered an informed public.)

Interfering with social media’s editorial discretion

Social media platforms exercise editorial discretion when they remove content pursuant to their ToS. Different social media companies have their own ToS’ and they are free to curate speech on their platform based on what they believe will best serve their users. For example, when former President Trump posted misinformation about election ballots, some social media platforms chose to add fact-checking labels and even take down content, while others carried Trump’s posts as is. In other words, platforms decide what content appears on their sites and what content does not, and these decisions reflect the platform’s view on what speech has value to its users – like a newspaper exercising editorial discretion on what content to print.   

When platforms have been accused of favouring certain categories of content, they have been quick to take the defence that they are ‘neutral’ and do not interfere with content. Even in court, they often take the defence that they are neutral intermediaries who cannot be asked to interfere with content or determine its legality. While politically expedient, this claim is also influenced by how Indian law regulates platforms. Section 79(2) of the Information Technology Act, 2000 states that platforms must not interfere with content if they wish to retain legal immunity for hosting unlawful content (colloquially referred to as safe harbour). The principle of safe harbour recognises that while platforms may exercise editorial discretion, the nature of this discretion is different given the high volumes of user uploaded content they host – and platforms need to be protected from the liability of any unlawful content that one of their millions of users may upload. Without safe harbour, platforms could be sued for any unlawful content they may be hosting and thus platforms aim to comply with Section 79(2).

This may suggest that platforms are indeed neutral and do not interfere with content (or exercise editorial discretion). However, Rule 3(1)(d) of the Intermediary Guidelines 2021 expressly notes that where platforms remove content voluntarily, they are not in violation of Section 79(2). In other words, Indian law expressly recognises that platforms remove content voluntarily based on their ToS – and thus exercise what amounts to editorial discretion in determining what content stays up and what content is taken down.

The GAC’s interference with editorial discretion

In the event a platform voluntarily removes content pursuant to its ToS, under the proposed amendments a user could approach the GAC to have it reinstated. An order by the GAC to reinstate such content contrary to the platform's ToS would be compelled speech and a direct interference with the platform's freedom to decide what content to host and what content not to host. Under the principle set out in Union v Motion Picture Association, such an interference would be impermissible unless the content being reinstated furthered a democratic self-governance interest. The Court's test would not permit compelling a platform to host propaganda or biased or false information.

It is also important to analyse the scope of the power granted to the government. The proposed amendments do not specify the types of content the GAC may direct to be reinstated (unlike the 'scientific and educational' films), nor do they specify the volume of content the GAC can direct to be reinstated. Thus, the current framing of the GAC confers unrestricted powers of interference with a platform's freedom of speech and may not form a 'reasonable' restriction on free speech. This concern is exacerbated by the fact that: (i) the Union Government is responsible for appointing members to the GAC, but the Union Government's instrumentalities or affiliates may also be parties before the GAC; and (ii) the independence and impartiality of the GAC Chairperson and Members are not guaranteed through traditional safeguards such as a transparent selection procedure, minimum qualifications, and security of tenure and salary.

Conclusion

A couple of important caveats must be made about the above argument. First, the proposed amendments are yet to be formally adopted and may undergo changes. Second, most major social media platforms are foreign actors, and the extent to which Article 19(1)(a) rights may be invoked by them remains contested. For example, the Union Government recently argued that Facebook and WhatsApp could not challenge Indian laws that allegedly violated their users' privacy because they were foreign companies. Third, platforms may strategically choose to avoid expressly invoking their free speech rights to edit and curate content, as doing so shines a light on how they structure online speech.

Lastly, the Union Government is trying to solve a real problem. Online platforms have often acted arbitrarily in taking down content (see here and here), and providing users some redress against unreasoned takedowns may be well-intentioned. However, any such mechanism must also respect the free speech right of platforms not to carry speech that they determine violates their ToS. The newly adopted Digital Services Act in Europe, for example, allows users to appeal to a dispute settlement body on the ground that the platform violated its own ToS when taking down the user's content – i.e., that the platform acted arbitrarily. If the GAC's role were limited to ensuring that platforms enforce their own ToS fairly, by examining whether content removals are in accordance with the relevant platform's ToS (rather than determining whether the content is legal or illegal), the GAC may provide users with recourse against platforms that otherwise exercise extraordinary power over online speech. Nonetheless, the GAC would still have to possess sufficient safeguards to ensure independence, impartiality, and fair procedures. In its current iteration, it risks being a tool for the State to compel platforms to host speech they would have otherwise taken down.

Land, Citizens, and Farmers: Recognising Political Constitutionalism

Indian political and legislative processes are far from perfect. Recently, this has led to arguments (on this blog, and elsewhere) exploring the possibility of increased judicial intervention in the legislative process – where the judiciary ‘examines the validity of procedures leading to enactment’. In this post, I suggest that judicial scrutiny of legislative activity should not underestimate the power of democratic processes to produce respect for the rule of law and rights. However, this approach requires us to locate the ‘constitutionality’ of legislative action in the legitimate decision-making processes of the political system (as opposed to the legal system). Through an examination of the events surrounding the Land Acquisition Bill (2015), the Citizenship Amendment Act (2019), and the three agricultural laws (2020), I highlight how actors beyond courts may impact the constitutionality of legislation.

I briefly outline the two approaches to ‘constitutionality’ under the rubrics of legal and political constitutionalism and then analyse the three legislative events from the lens of political constitutionalism. I do not suggest a wholesale bar on judicial interventions in the legislative process (nor do authors who advocate judicial interventions suggest abandoning other means to improve the legislative process). I also do not delve into how political or legislative reforms may be achieved. The purpose of this post is merely to highlight how democratic processes can also be an avenue to achieve constitutional goods.

Legal and Political Constitutionalism

Legal constitutionalism suggests that because citizens and legislators may not always embrace the values necessary for constitutional democracy, the respect for these values needs to be protected by judges. Therefore, judges serve as a key restraint on legislative power – evaluating whether legislation satisfies constitutional values. (E.g., the Supreme Court invalidating legislation that violates the freedom of speech.) Stepping outside the grammar of rights, such exercises of legislative power typically involve substantive and fundamental competing interests (e.g., public order v free speech), and legal constitutionalism suggests that judges are best situated to settle these crucial issues. However, dissenting opinions and overruled judgements indicate that judges themselves disagree over which outcomes uphold constitutional values. Therefore, the ‘correctness’ of these outcomes largely stems from legal structures that confer jurisdiction on courts to settle these disputes and confer finality on judicial determinations on competing societal interests.

Political constitutionalism argues that 'the democratic process is equally, if not more legitimate and capable than courts at resolving these substantive and fundamental disagreements.' Rather than suggesting judicial oversight, it focuses inter alia on: (i) improving the democratic process through better electoral and parliamentary systems (e.g., proportional representation and parliamentary scrutiny); (ii) creating multiple inflection points where power is balanced (federalism, off-set election cycles, and independent bodies); and (iii) political parties competing for the support of diverse interest groups who themselves have cross-cutting interests, compelling political parties to listen to varied viewpoints and often compromise. Crucially, it 'locates the 'constitutionality' of legislation within the political and not the legal system' by focusing on how decision-making procedures can be made legitimate through balancing institutions and ensuring transparent participation. Where legal constitutionalism may emphasise judicially policed rights as central to constitutional culture, political constitutionalism sees democratic participation as causing citizens to identify with a constitutional system.

This may sound idyllic, and caveats must be made in the Indian context. India’s political and legislative structures contain several democratic weaknesses (the role of the governor, ordinance powers, anti-defection, partisan speakers, lack of intra-party democracy, imbalanced federalism). As a result, the constraints on power envisioned by political constitutionalism may vary in efficacy and visibility or be entirely absent. In all three legislative instances discussed below, there was immense public pressure in the form of demonstrations, speeches, strikes, and vigils despite fragile protections for civil liberties. The need for such resistance to hold power accountable does not undermine the argument for political constitutionalism (such actions are firmly within the democratic process it envisions) but rather points to the urgent need to reform our political structures to allow for public opinion, contestation, and compromise through elected officials without blood having to be shed.

Readers will forgive my painfully brief explanation of three complex legal issues. The goal here is merely to identify when the structures of political constitutionalism are at play (I tag them in italics for brevity).

Land Acquisition Bill

On 24 February 2015, the Government introduced a bill (Land Acquisition Bill) to replace an ordinance which amended the Right to Fair Compensation and Transparency in Land Acquisition, Rehabilitation and Resettlement Act, 2013. The Land Acquisition Bill identified certain situations when the government did not need to obtain landowners’ consent or conduct a social impact assessment before acquiring land. The Land Acquisition Bill passed the Lok Sabha on 10 March but the government did not advance the Bill in the Rajya Sabha because it lacked a majority in the upper house (bi-cameralism and federalism). The existing ordinance was due to expire on 5 April, but the Rajya Sabha session continued till 8 May. As ordinances cannot be passed when Parliament is in session, the ordinance looked certain to expire until, on 28 March, the President prorogued (terminated the session of) the Rajya Sabha, allowing the government to re-issue the ordinance on 3 April, effectively circumventing Parliamentary approval. These actions were challenged in the Supreme Court as an ‘abuse of the President’s ordinance powers’.

The Land Acquisition Bill was referred to a Joint Parliamentary Committee. In the BJP-chaired committee, it was reported that all eleven BJP MPs moved amendments reinstating the need to obtain landowners’ consent and conduct social impact assessments (parliamentary scrutiny and intra-party contestation). However, the Land Acquisition Bill never passed the Rajya Sabha, and in August 2015 the ordinance was allowed to lapse two months before the 2015 Bihar elections (staggered electoral cycles). The Supreme Court would eventually dismiss the court challenge as infructuous.

Citizenship Amendment Act

The Citizenship Amendment Act, 2019 (CAA) allows illegal migrants to apply for citizenship if: (i) they entered India before 31 December 2014; (ii) they receive exemptions under the Passports and Foreigners Acts; (iii) they are from Afghanistan, Bangladesh, or Pakistan; and (iv) they are Hindu, Sikh, Christian, Parsi, Jain, or Buddhist. The Government claimed the intention of the law was to protect religious minorities in Afghanistan, Bangladesh, and Pakistan from persecution, and that the exclusion of Muslim migrants was a ‘reasonable classification’ within the legislature’s discretion given that Muslims constitute a majority in these three countries.

However, critics of the CAA argued that for the Act to be constitutional, the classification must be connected to the purpose of the legislation. If the goal of the CAA was to protect individuals from religious persecution, then the test under the Act must be – is the individual being persecuted? In other words, the law cannot (without basis) presume that Muslims are not persecuted in Afghanistan, Pakistan, and Bangladesh. The exclusion of Muslims was thus disconnected from the goal of protecting individuals from persecution, and hence violated Article 14 (equality before law). It was also pointed out that the exclusion of Rohingya Muslims from Myanmar suggested an unprincipled (and potentially discriminatory) use of legislative power.

The adoption of the CAA led to sustained nationwide protests. The CAA was also opposed by several states, and the Union Government received resolutions from the legislatures of Meghalaya, West Bengal, Tamil Nadu, Kerala, and Punjab denouncing the CAA (federalism). Kerala and Rajasthan would go on to sue the Union Government under Article 131 of the Constitution over the CAA. The Government has not repealed the CAA. However, despite parliamentary manuals requiring that subordinate legislation (rules) be framed within six months of a law passing, the Union Government has yet to adopt any rules for the CAA two years on (as of writing, the Home Ministry has requested time till January 2022). In the two years since the passage of the CAA, 140 petitions have been instituted challenging the constitutionality of the legislation, but the Supreme Court has yet to render a verdict.

Agricultural Laws

In 2020, the Government introduced three laws aimed at reforming the agricultural sector (Farm Laws), discussed in detail here. The laws were passed in the Rajya Sabha using a voice vote instead of a division vote. In a voice vote, the chairperson places the issue before the house and decides the outcome based on whether the ‘ayes’ or ‘noes’ are louder. This may be fine for quickly disposing of issues on which there is significant consensus. However, for more closely contested votes, the process of a division vote exists (where MPs vote electronically). It stands to reason that any vote won by voice vote should also be able to pass in a division vote (otherwise the vote is wholly illegitimate, as the chairperson has usurped the collective decision-making of the house and replaced it with their own singular discretion). To guard against this risk, the Rajya Sabha Rules expressly provide that if the outcome of a voice vote is challenged, there must be a division vote (r. 253).

The Farm Laws were passed amid pandemonium in the Rajya Sabha, and the Chairperson contended that opposition MPs were not in their seats when they challenged the voice vote (a claim contradicted by video footage of the day, and an issue not helped by the fact that the audio feed from the Rajya Sabha was cut for half an hour). After the Farm Laws were passed, the leader of the opposition met the President, protested the breaches of parliamentary procedure, and requested the President to refuse assent (another structural inflection point). Soon after, the NDA Government lost its oldest coalition partner (the SAD) (coalitions as a restraint on power). The yearlong protests by farmers and the incident at Lakhimpur Kheri have sufficiently entered the popular consciousness that they need not be repeated here. Towards the end of this stalemate, an MP from the ruling party introduced a private member’s bill to secure some of the protections sought by protesting farmers (intra-party contestation). Eventually, in December 2021 the Farm Laws were repealed in the lead-up to state elections in Uttar Pradesh and Punjab (staggered electoral cycles).

Conclusion

This piece began by noting that the shortfalls of our political and legislative system in meeting our need for legitimate government can be addressed both through judicial interventions and through political structures. The goal of the above examples is to highlight how political structures such as bi-cameralism, federalism, staggered elections, coalition politics, intra-party democracy, and public protests can restrain power, arrive at decisions on contentious issues in legitimate ways, and respond to breaches of the rule of law. These structures are by no stretch perfect. However, reform of political structures must begin from an understanding that they are worth reforming. At a bare minimum, this requires a discourse which situates the constitution in the political system beyond courts. It also requires us to look past political double-speak and recognise when political structures are at play.

There is also reason to be cautious of increased judicial intervention. It is notable that in all three of the above examples, constitutional courts have been conspicuous in their inability or unwillingness to intervene. It is always possible to examine individual situations and argue that a court, acting as it is supposed to, could have done better. But this ignores the reality that just as political actors do not always act in conformity with their roles, courts are also not perfect functionaries. Arguments for increased judicial intervention must address the reality that constitutional courts in India have their own structural and behavioural weaknesses. Lastly, where judicial interventions may vest more power in judges, a focus on reforming electoral systems and structuring institutions so that governmental power is constantly subject to meaningful competition and accountable to cross-cutting interests creates space for a broader set of actors to engage in constitutional authorship.

Intermediary Guidelines and the Digital Public Sphere: Balancing the Scales

The last two posts examined prominent new features of the Intermediary Guidelines 2021: automated filtering (here) and tracing first originators (here). However, the undoubted goal of the Guidelines, demonstrated by the express regulation of “Significant Social Media Intermediaries”, is to hold large social media companies accountable for their role in structuring online speech. This post examines the scheme of the Guidelines to assess how effectively they regulate social media.

I begin by examining the liability for communicating unlawful speech to understand why intermediaries are granted immunity for hosting unlawful content. Next, I explain the power imbalance between tech companies and internet users. Finally, I analyse the Intermediary Guidelines’ attempt to remedy this power imbalance by providing various reliefs to users vis-à-vis online platforms.

Speech models and their inadequacy for online platforms

In determining liability for communicative acts, the law distinguishes between publishers and distributors. Publishers are liable for the content they publish, while distributors are typically not liable for content they distribute because they are not deemed to know if they are distributing unlawful content. For example, a newspaper may be liable for an article it solicited/sourced, compiled and edited but a newsstand selling the newspaper will not be held liable because it could not be expected to know the contents of every article in every newspaper it sells. (Volumes could be written on the myriad of nuances within these concepts but acknowledging the distinction will suffice for now.)   

However, this publisher-distributor distinction breaks down in the case of online platforms. They are not neutral distributors as they actively moderate and rank content to determine what users see. But they are also not quite publishers, as they are not responsible for creating the content on their platforms. Unlike a newspaper, whose employees write the articles, on platforms, it is internet users not a platform’s employees that create content. Thus, the publisher-distributor distinction serves legislators poorly when deciding how to regulate online platforms.

Further, legislators worry that if online intermediaries are held liable for unlawful content on their platforms (like a publisher), intermediaries would either: (1) adapt their behaviour to fall under the distributor model and exercise no control over their platforms, leading to the internet being filled with the worst kinds of content; or (2) follow the publisher route and take down any content that was remotely unlawful, leading to vast horizontal censorship by online intermediaries and defeating the purpose of the decentralised, user-driven internet. Enter intermediary immunity.

Intermediary immunity and voluntary moderation

The approach taken by several legislatures, including India’s, has been to provide online intermediaries legal immunity for unlawful content on their platforms even though they filter and curate content. This approach serves two very important goals: (1) online intermediaries can voluntarily filter content without worrying about a publisher’s liability, resulting in less abusive speech online for users; and (2) intermediaries have enough breathing space to not take down user content en masse, resulting in the protection of users’ free speech against horizontal censorship. Rule 3(1)(d) (third proviso) of the Intermediary Guidelines 2021 expressly notes that intermediaries will not lose their status as neutral entities even though they may voluntarily remove a wide range of presumptively illegal content from their platforms.

But this is where things start to get tricky. The definition of ‘presumptively illegal speech’ set out in Rule 3(1)(b) of the Intermediary Guidelines is so broad that platforms have immense discretion over what content to take down within the scope of the rule itself. Under Rule 3(1)(b), intermediaries may take down content that (inter alia):

(ii) is defamatory, obscene, pornographic, paedophilic, invasive of another‘s privacy, including bodily privacy, insulting or harassing on the basis of gender, libellous, racially or ethnically objectionable, relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws in force […]

(viii) threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting other nation […]

(x) is patently false and untrue, and is written or published in any form, with the intent to mislead or harass a person, entity or agency for financial gain or to cause any injury to any person

These are obviously very broad terms, and intermediaries may reasonably disagree with their users over whether content falls within these categories. This creates a conflict between the two goals of intermediary immunity. On the one hand, immunity is essential to incentivising platforms to freely take down the worst forms of content without risk of liability; on the other, the wide discretion Rule 3(1)(b) gives them results in users complaining of horizontal censorship (as has already happened on both sides of the political spectrum).

(Note: in Shreya Singhal the Supreme Court ruled that the contents of Rule 3(1)(b) should be limited to the restrictions contained in Art. 19(2) of the Constitution. This may be both normatively undesirable and impractical, as there are several categories of content that fall outside of Art. 19(2) that we may want platforms to voluntarily restrict, e.g., copyright and trademark infringing content, sexually explicit content not rising to the ‘public morality’ threshold, fraudulent content causing consumer harm etc.)

Thus, legislation/regulation must balance the protections granted to intermediaries with the risk of horizontal censorship. This is trickier than it sounds, as the underlying interests of platforms (maximising advertising revenue by the collection and sale of personal data) and users (free speech and privacy) can diverge significantly.   

The problem magnified manyfold

As I discussed in much greater detail (here), the problem of horizontal censorship is multiplied severalfold because of two crucial factors. First, large online social media platforms are central to public discourse. Second, because there are only a handful of these platforms, simply leaving a platform (or worse, being suspended) can severely curtail a user’s participation in public discourse. As Kate Klonick poignantly notes, ‘we are dependent on these private platforms to exercise our public rights.’

The true extent of citizens’ vulnerability is demonstrated at several layers. First, as moderation is conducted by private entities, it is undemocratic. While platforms may respond to public outcries, those without a public voice – the already marginalised – are disproportionately impacted with little hope of recourse. Next, platforms do not disclose their moderation policies. While ‘Terms of Service’ and ‘Community Guidelines’ are publicly available, the rules and procedures platforms use to take down content constantly change and are rarely disclosed. For example, Facebook ‘accidentally’ blocked the hashtag ‘ResignModi’, later reinstating the content with no explanation of the procedure employed in either the take down or the reinstatement. Finally, the stranglehold online platforms have over public discourse may be leveraged by democratic governments to entrench their power by silencing criticism.

Balancing the scales

Just as gender and caste have historically constituted centres of private power, modern social media represents a new site where citizens are dominated. Regulation must seek to remedy this power imbalance while ensuring that the other interest behind intermediary immunity (incentivising voluntary moderation by not saddling platforms with a publisher’s liability) is not annihilated. The Intermediary Guidelines leave the basic structure of intermediary immunity untouched (except for several additional compliance requirements on social media companies geared towards expanding investigative and enforcement powers) but attempt to remedy the power imbalance by granting users several reliefs vis-à-vis online intermediaries.

Just as platforms have the discretion both to take down and to not take down speech, internet users may also seek both outcomes. A user may be aggrieved by content and want an intermediary to take down content (the “Take Down Relief”). Alternatively, a user may be aggrieved by the decision of a platform to take down content, and want it reinstated (the “Keep-Up Relief”).

The Take Down Relief

One part of the Take Down Relief involves citizens approaching courts seeking orders (typically in the form of an injunction) against content being displayed online. The Supreme Court in Shreya Singhal ruled that it was impermissible to require online intermediaries to take down content merely because someone complained against a piece of content – a judicial order was necessary (¶117). The decision is in line with the above stated twin goals of intermediary immunity. By protecting platforms from liability until they received a court order, intermediaries could continue to voluntarily remove the worst forms of content from their sites. Crucially, by protecting platforms from being sued for content on their sites until a court ruled the content was (at least prima facie) unlawful, intermediaries would not take down content en masse, thus avoiding horizontal censorship. Courts would weed out frivolous complaints and direct intermediaries to take down content after balancing all the interests involved.

Thus, users do have a Take Down Relief against illegal content in the form of courts, and the Intermediary Guidelines 2021 do not interfere with this relief (in fact, they expressly recognise it). However, this is by no means a perfect relief, given the time and costs involved in getting a court order. Further, Rule 3(1)(b) is so broad that courts have little guidance (and a lot of discretion) when deciding whether to take down online content. For example, the Delhi High Court noted that some factors that should be considered prior to take down are: (i) the comparative importance of the rights at issue; (ii) the availability of less restrictive measures; (iii) the costs associated with implementing the measures; and (iv) the efficacy of the measures implemented by the ISP. However, another bench of the High Court merely noted that the plaintiffs had demonstrated a prima facie case of defamation and directed the intermediary to block the content. Yet another bench directed search engines to de-index content and permitted a plaintiff to directly approach Internet Service Providers (ISPs) to block additional content uploaded after the original take down order, allowing ISPs to challenge the take down if they believed the plaintiff had exceeded the terms of the original order (which they have no incentive to do).

Rolling back Shreya Singhal

Rule 3(2) of the Intermediary Guidelines 2021 also introduces a new Take Down Relief. A user may lodge a complaint where content “exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation…” The intermediary is obligated to take down such content within 24 hours of receiving a complaint. On the one hand, the Rule empowers users vis-à-vis social media platforms, and a cogent case may be made that where a user complains about explicit/intimate images of themselves uploaded without their consent, and no other interests are involved, users should not be required to go to court. (Note, they may not even have to go to court if the intermediary agrees and takes down content once flagged, but that approach puts the relief at the sole discretion of the platform.)

On the other hand, requiring intermediaries to take down content at the behest of a private user was an approach expressly rejected by Shreya Singhal to protect against platforms being swamped by complaints and taking down all content that any other user disagreed with. The simple truth is that hard cases exist. For example, the content complained against may depict two people, one who wants the content to stay online and one who wants it taken down. Now imagine the content is also a matter of public interest. These delicate situations where rights need to be balanced are exactly why Shreya Singhal insisted on court oversight to guard against overzealous complainants and over-cautious platforms.

Efficacy probably lies somewhere between Rule 3(2) and Shreya Singhal, and the Guidelines could have: (i) distinguished between cases where interests need to be balanced and cases where they do not; (ii) set up an independent body/fast-track process to hear take down claims; or (iii) narrowed judicial discretion on when content should be taken down. Instead, the Guidelines fail to improve the court order system of Shreya Singhal and simultaneously create a blanket workaround that may lead to content being taken down at the sole behest of a disgruntled user in delicate cases where rights ought to have been balanced by a court.

The Keep-Up Relief

Users may also have a grievance when platforms take down content that users believe should have stayed up (e.g., see the suspension of Sanjay Hegde’s Twitter account over: (i) the use of August Landmesser’s photo defying the Nazi salute; and (ii) retweeting a poem by a CPI (Marxist-Leninist) politician). As discussed above, while users can review ‘Terms of Service’ and ‘Community Guidelines’, the actual rules and procedures employed to take down content are not transparent.

Prior to the Intermediary Guidelines 2021, a user could pursue a contractual remedy, arguing that the speech did not violate the platform’s ‘Terms of Service’. Such remedies are unlikely to succeed as contracts are typically skewed in favour of platforms and confer broad discretion to take down content. For example, Facebook’s Terms of Service state that it can “remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.”

Hegde is presently pursuing a public law remedy under Article 226, arguing that Twitter performs a social/public function and should thus be required to respect his free speech (Art. 19) and non-discrimination (Art. 14) rights. Without going into the merits of this argument, such constitutional gymnastics could be avoided by legislation/regulation that directly gives users recourse against social media platforms. Enter the Intermediary Guidelines 2021.

Before we begin, it should be noted that the Intermediary Guidelines 2021 risks substantially aggravating the problem of intermediary take down itself, by incentivising/mandating social media platforms to employ automated filtering technologies. The imprecision of such technologies is likely to cause more users than ever to seek out a Keep-Up Relief. (See my previous post on the subject here.)

Rule 4(8) of the Guidelines attempts to provide such a Keep-Up Relief. The Rule states that where a Significant Social Media Intermediary voluntarily disables content, the platform must, prior to the disabling of content, provide the creator/poster of the content a “notification explaining the action being taken and the grounds or reasons for such action”. Next, the platform must also provide the user with a “reasonable opportunity” to dispute the platform’s decision and request the reinstatement of the content. This creates a regime where intermediaries must provide reasoned notice and meaningful review to internet users when content is taken down.

On the face of it, this is a step in the right direction as it forces social media platforms to employ some minimum rationality and allows users a chance to contest take down decisions. But how well will this structure of accountability hold up when confronted with hard cases (which it undoubtedly will be)? First, the obligation merely requires social media platforms to provide an opportunity for users to dispute the decision and does not improve on the standards of content evaluation to be applied by them – recall that Rule 3(1)(b) is broad and does not meaningfully restrict the platform’s discretion in decision making. Judges themselves struggle to impose uniform standards on when content should be taken down. While platforms may seek to ensure a measure of uniformity, unlike judicial decisions, the reasoning by platforms under Rule 4(8) is not required to be publicly available and no system of precedent applies – creating a risk of opaque and haphazard decisions.  

The Achilles heel of Rule 4(8)

It is also important to understand that the Intermediary Guidelines 2021 regulate the conditions for intermediary immunity, i.e., the conditions intermediaries must satisfy to not be treated as publishers and sued for hosting unlawful content. Now, even if an intermediary breaches the notice and hearing obligation under Rule 4(8), the consequence is a loss of this legal immunity for hosting unlawful content. But the intermediary is not hosting the content (indeed, the whole dispute arises because the intermediary is refusing to host the content), so the intermediary risks nothing by losing this legal immunity. Simply put, intermediaries cannot be sued for content they are not hosting, so there is no legal risk associated with arbitrary voluntary take downs. This highlights the folly of trying to offer users recourse against moderation decisions through the conditions of intermediary immunity, which is all the Intermediary Guidelines are.

Lastly, Rule 4(8) does not open the door for users to approach a more legitimate body and argue for content to be reinstated. In fact, if courts were to intervene under Rule 4(8), this would undermine the structure of Section 79 and the Intermediary Guidelines, which are premised on treating intermediaries as neutral even though they voluntarily take down content. In the case of Take Down Reliefs, where Person A is aggrieved by content posted by Person B and asks the Court to direct Intermediary X to stop publishing the content, a Court can provide this relief while still upholding the legal fiction of intermediary neutrality (i.e., accepting Intermediary X’s decision to not take down the content as it is neutral like a newsstand). Section 79, the Guidelines, and Shreya Singhal are premised on not examining the correctness of Intermediary X’s decision to keep the content up.

However, where a platform has taken down content and the user seeks a Keep-Up Relief, Person A would sue Intermediary X for its decision to take down Person A’s post. A court cannot logically grant relief without: (i) acknowledging that Intermediary X is not neutral, in which case it really is more like a newspaper and should be liable for its moderation decisions; (ii) going against the protection for voluntary moderation expressly provided in the Intermediary Guidelines; and/or (iii) engaging in the sort of constitutional and interpretative gymnastics that the Sanjay Hegde case involves. All this leaves the efficacy of the Rule 4(8) Keep-Up Relief at the mercy of large social media companies, and their internal calculus as to whether offering users such recourse will garner them goodwill (see Facebook’s Oversight Board).

Conclusion

The problem the government sought to address with the introduction of the Intermediary Guidelines is a real one. Large social media companies have unbridled power over what users see online at a time when we increasingly live our lives on the internet. This problem cannot be solved unless regulation accepts two significant realities: (i) that intermediary immunity remains central to incentivising intermediaries to moderate content and protecting users against horizontal censorship; and (ii) that the current moderation procedures employed by platforms lack transparency, accountability, and legal certainty, and disproportionately impact marginalised communities.

By relying solely on the structures of intermediary immunity, the Intermediary Guidelines demonstrate a singular lack of creativity when it comes to regulating online platforms. One alternative approach would be a separate statute with certain minimum standards of conduct for intermediaries (no easy task, but at least one that begins with an acceptance of the problems that need to be solved). As a result, the new Guidelines not only fail to provide users with efficacious remedies to either take down or keep up online content, but also perpetuate a status quo where online platforms are free to exercise arbitrary and dominating power over Indian citizens – with the only meaningful regulatory intervention occurring through government blocking of content when platforms fail to toe the government line.

The author is grateful to Praharsh Johorey for his comments on the draft of this post.

Intermediary Guidelines and the Digital Public Sphere: Tracing first originators

The previous post (here) set out how social media companies are key facilitators of public discourse, and structure the digital public sphere. The Intermediary Guidelines distinguish between ordinary intermediaries and ‘Significant Social Media Intermediaries’ (“SSMIs”) and Rule 4 sets out “due diligence” obligations that SSMIs must satisfy to avail of legal immunity for content shared on their platforms. In other words, a violation of Rule 4 of the Intermediary Guidelines does not itself impose liability on SSMIs, but it exposes them to a significant risk of liability given the large volumes of content being transacted on their platforms.

This post examines the requirement that SSMIs providing messaging services identify the “first originator” of specific content on their platforms pursuant to judicial or government orders. I begin by setting out the content of the requirement. Next, I briefly examine the role of secure communications and anonymity under the Indian Constitution. I then set out the technical proposals as to how a first originator may be identified and finally evaluate whether Rule 4(2) would survive constitutional scrutiny.

The ‘Traceability’ Requirement

Rule 4(2) obligates SSMIs that are “providing services in the nature of messaging” (think WhatsApp, Signal, Telegram, and iMessage) to “enable the identification of the first originator of the information on its computer resource”. SSMIs are required to comply with this obligation in two situations:

(1) where a judicial order is passed; or

(2) where an order is passed under Section 69 of the IT Act and the Information Technology (Procedure and Safeguards for interception, monitoring and decryption of information) Rules, 2009 (“IT Decryption Rules”).

The IT Act defines an “originator” as anybody who generates, transmits, or stores content. The effect of the Rule is to enable the identification of the first user profile on a computer resource to generate, transmit, or store a specific piece of information. While Rule 4(2) contemplates a judicial order requiring identification, it does not mandate one. Orders under Section 69 are passed by senior civil servants, so there is no meaningful check on executive power. Further, the Union Government insists this is a measure to fight illegal content that has widespread reach; however, Rule 4(2) itself contains no threshold for ‘virality’ and could in principle apply to any content that was shared more than once: wherever there is more than one “originator”, there is, de facto, a “first originator”.

Rule 4(2) includes three safeguards and creates one legal presumption. First, an identification order may only be passed for the purposes of “prevention, detection, investigation, prosecution or punishment” of offences “related to” the sovereignty, integrity, or security of India, friendly relations with foreign states, public order, or the incitement of offences relating to any of these headings, as well as rape, sexually explicit material, or child sexual abuse. Second, an identification order cannot be passed where a less intrusive means to identify the first originator exists. Third, no SSMI is required to disclose the “contents of any electronic message or any other information related to the first originator, or any information related to its other users”.

Finally, Rule 4(2) also states that if the first originator of content on the messaging platform is located outside India, the first originator within India (i.e., the first person who generates, stores, or transmits the content in India) “shall be deemed” to be the first originator with respect to that content.

Privacy and Proportionality in India

In the last post we examined how social media companies constitute the digital public sphere. This is borne out empirically in the case of messaging platforms as well. In a recent study conducted by the Reuters Institute and the University of Oxford, 52% of Indian respondents reported getting their news via WhatsApp. 60% clicked on news links, 46% posted or shared news on the platform, and 39% took part in group or private discussions. Messaging platforms facilitate public discourse and allow citizens to shape public opinion, perhaps best demonstrated by the high levels of political content on these platforms. Anonymity and security thus form crucial barriers against political speech being chilled.

Messaging platforms also allow individuals to share constitutionally protected but socially stigmatised views, ensuring individual autonomy and dignity. It allows people to securely discover and express themselves, and effectively organise with other citizens to create centres of countervailing power. As the former UNHRC Special Rapporteur noted, being protected from the public gaze may allow citizens to discover and share ideas they may otherwise be persecuted for. “The ability to search the web, develop ideas and communicate securely may be the only way in which many can explore basic aspects of identity, such as one’s gender, religion, ethnicity, national origin or sexuality.” However, the security provided by privacy is especially fragile. Courts have recognised that where even the threat of surveillance exists without a remedy, there exists an interference with a citizen’s privacy.

Almost two decades ago, the Supreme Court in PUCL recognised that Indians have a constitutionally guaranteed right to communicate privately. In Puttaswamy, the Court articulated a vision of privacy grounded in individual autonomy that interacted with and enabled the enjoyment of other rights guaranteed by the Constitution, most notably the right to freely and privately hold and express opinions, and to associate with other citizens (¶412). In other words, privacy forms a necessary foundation for the enjoyment of the rights and privileges guaranteed by the Constitution. The Indian Constitution thus guarantees private and secure communications to both protect individual autonomy and facilitate democratic self-governance.

Any infringement on a citizen’s right to communicate privately must therefore satisfy the test of proportionality: (1) the infringing measure must pursue a legitimate state aim; (2) the measure must substantially further the state aim; (3) the measure must be the least restrictive option amongst equally effective alternatives; and (4) the measure must not have a disproportionate impact on rights holders.

Rulemaking power

Before we examine the issue of privacy and encrypted messages, there exists a preliminary issue concerning the very power to frame such a rule. The prefatory text to the Intermediary Guidelines notes that the Guidelines are issued under the powers granted to the Union Government by Sections 87(2)(z) and 87(2)(zg) of the IT Act. The former grants the Union Government power to frame website-blocking rules and the latter grants power to frame rules regulating the immunity granted to intermediaries. In short, neither of the sub-clauses relates to monitoring or tracing content on computer networks. The government may argue that Rule 4(2) forms legitimate regulation of intermediary immunity, but this is belied by the fact that the IT Act itself grants the government the power to monitor and decrypt content in a separate and independent provision, namely Section 69. Section 69 has its own rule-making provision, Section 87(2)(y), and the government has already framed the IT Decryption Rules under that section.

Operationalising ‘Traceability’

There exists a gap between Rule 4(2) mandating SSMIs to identify the first originator and the platforms being able to do so, because all major messaging platforms such as WhatsApp, iMessage, and Signal are end-to-end encrypted. This means that even if messages on these platforms were monitored or intercepted, they would first need to be decrypted using a decryption key before their contents could be read. Crucially, the decryption key is stored on users’ devices and not with the platforms, so WhatsApp could not reveal the contents of messages even if it wanted to do so to comply with Rule 4(2). Further, the decryption key is unique to each pair of users and changes over time. So even if a decryption key were acquired, it would only reveal the contents of one chat for the limited period that the specific key was in use.
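The core property described above can be illustrated with a toy sketch. This is a hypothetical stand-in for real end-to-end encryption protocols (such as the Signal protocol): the XOR cipher and variable names below are purely illustrative, but the structural point holds, namely that the platform only ever handles ciphertext and holds no key.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real end-to-end encryption; applying it twice
    # with the same key recovers the original bytes.
    return bytes(b ^ k for b, k in zip(data, key))

# The key is generated on the users' devices and never shared with the platform.
device_key = secrets.token_bytes(64)

plaintext = b"I plan to undermine the Indian Government"
ciphertext = xor_cipher(plaintext, device_key)

# The platform only ever stores and relays ciphertext; without the device key,
# it cannot comply with an order to read (or search) the message contents.
platform_sees = ciphertext
assert platform_sees != plaintext

# Only the key-holder (the user's device) can decrypt.
print(xor_cipher(ciphertext, device_key) == plaintext)  # True
```

Because the key lives only on the endpoints, an interception order served on the platform yields nothing readable; this is the structural reason Rule 4(2) cannot be satisfied by simply handing over stored data.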

With this understanding, the impossibility of the task demanded of SSMIs comes into view. How does a messaging platform trace a piece of content across thousands, potentially millions, of chats (none of which it possesses decryption keys for) to locate the first originator? This tension is borne out in the IT Decryption Rules drafted in 2009. The Rules define “decryption assistance” as “allow access, to the extent possible, to encrypted information”. This is further buttressed by Rule 13(3) of the IT Decryption Rules, which states that “Any direction of decryption of information issued under rule 3 to intermediary shall be limited to the extent the information is encrypted by the intermediary or the intermediary has control over the decryption key.”

Given that Rule 4(2) of the Intermediary Guidelines expressly states that an order to identify a first originator shall be “as per” the IT Decryption Rules, it may plausibly be argued that an identification order under Rule 4(2) would simply not apply to a platform which does not possess the decryption key. In fact, Facebook has expressly contended that a ‘best efforts’ obligation to assist the government does not contemplate a platform radically re-engineering its service to allow the government to trace originators. However, while the Union Government states that it does not want to break end-to-end encryption, it has insisted that platforms are obligated to modify their functionality to enable tracing first originators.

There have been two prominent proposals on how traceability may be achieved without breaking end-to-end encryption. The first proposal was mooted by one Professor Kamakoti and is discussed in Aditi Agrawal’s piece (here). More recently however, anonymous officials from the Ministry of Electronics and IT have argued that a “hash constant” may be used to identify originators.

Hashes

The idea of a hash is to assign every distinct message a unique hash identifier. Briefly, if User P sends the message “I plan to undermine the Indian Government” to User Q, the message is assigned a hash identifier, for simplicity say the identifier is ‘52’. User Q now forwards the message to Users R, S, and T, who go on to send it to hundreds or thousands more until it reaches User M who believes the message to be illegal. Now, an investigative agency can ask the platform to run a search against all messages having the identifier 52, to find when it first appeared – with User P.
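The lookup described above can be sketched in a few lines, assuming a naive content-only hash. The `content_hash` function and the message log below are illustrative assumptions, not an actual platform API; the point is simply that if identical text always yields the same identifier, the platform can search its records for the earliest occurrence.

```python
import hashlib

def content_hash(message: str) -> str:
    """Naive content-only hash: identical text always maps to the same identifier."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()[:8]

# Hypothetical message log the platform might hold: (timestamp, sender, message).
log = [
    (1, "P", "I plan to undermine the Indian Government"),
    (2, "Q", "I plan to undermine the Indian Government"),  # Q forwards P's message
    (3, "R", "Meet for lunch?"),
    (4, "S", "I plan to undermine the Indian Government"),
]

def first_originator(log, flagged_hash):
    """Return the earliest sender whose message matches the flagged hash."""
    for ts, sender, msg in sorted(log):
        if content_hash(msg) == flagged_hash:
            return sender
    return None

flagged = content_hash("I plan to undermine the Indian Government")
print(first_originator(log, flagged))  # P, the first user to send the flagged text
```

Note that this sketch only works because the hash here ignores the encryption keys entirely; as the next paragraphs explain, that assumption is exactly where the proposal runs into trouble.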

In her piece, Aditi notes that this may not work because platforms generate hashes based on: (1) the contents of the messages; and (2) the keys between users, which are constantly changing. Therefore, the message sent from User Q to User R will have a different hash from the same message sent from User Q to User T. This means that any one hash would be of limited value, as it would only disclose identical messages exchanged between two specific users while a specific decryption key was in use. All other identical messages would have different hashes.
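This objection can be illustrated with a hypothetical keyed hash: because the identifier mixes in the pairwise session key, the same text produces unrelated identifiers for different pairs of users and for the same pair after a key rotation. The key values below are purely illustrative.

```python
import hashlib
import hmac

def keyed_hash(message: str, session_key: bytes) -> str:
    # The identifier depends on BOTH the message and the pairwise session key.
    return hmac.new(session_key, message.encode("utf-8"), hashlib.sha256).hexdigest()[:8]

msg = "I plan to undermine the Indian Government"
key_q_r = b"key-shared-by-Q-and-R"        # hypothetical pairwise key
key_q_t = b"key-shared-by-Q-and-T"        # a different pair, a different key
key_q_r_later = b"key-Q-R-after-ratchet"  # same pair after key rotation

h1 = keyed_hash(msg, key_q_r)
h2 = keyed_hash(msg, key_q_t)
h3 = keyed_hash(msg, key_q_r_later)

# The same text produces three unrelated identifiers, so no single hash
# can be used to trace every copy of the message across the platform.
print(h1 == h2, h1 == h3)  # False False
```

Searching for any one of these identifiers would surface only the copies of the message sent between one pair of users during one key period, which is exactly the limitation Aditi describes.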

Ironically, if this is not the case, the consequences are far grimmer. Because hashing ties an identifiable value to the contents of a message (e.g., 52 = I plan to undermine the Indian Government), the platform, and consequently the government, could know every user on the platform who has that message on their phone. This is contrary to Rule 4(2) itself, which states that SSMIs shall not be required to disclose the contents of the message or any information related to other users. (Sidebar | it is entirely conceivable that over time the government shifts from searching for hashes that equal “I plan to undermine the Indian State” to hashes that equal “I don’t like the Indian Government.”)

Constitutional scrutiny

The proportionality test is a cumulative one, and for the sake of brevity I only highlight the most striking issues with Rule 4(2). First, the State bears the onus of demonstrating that the measure (tracing first originators) furthers its stated aims (preventing the incitement of offences against the integrity of India, sexually explicit material, etc.). The law recognises that nearly any measure may potentially be useful or desirable for governments in achieving the cessation of crime and, ideally, requires that the State demonstrate that the measure in question is “necessary” to achieve its stated aims.

Why first originators?

It is unclear how tracing the first originator assists the State in achieving its aims. We cannot assume that the first originator created the content. This logic is defeated because Rule 4(2) cannot cover cross-posting; a Twitter user could create and upload a video that is subsequently downloaded and shared on WhatsApp, in which case the first originator is not the creator. Rule 4(2) itself rejects the creation rationale by acknowledging that content may be created outside India but sent to India, creating a ‘first receiver’ of sorts. Now, if we were to argue that this ‘first receiver’ is facilitating the spread of illegal content in India, how do we justify overlooking other originators for domestically sourced content? Imagine I send “illegal” content to User X, who forwards it to a group with several thousand users. Who is facilitating the spread of illegal content, whom should the law be more focussed on identifying, and how should liability be apportioned between User X and me?

Further, as Nandan Kamat noted, secondary liability for repeating and disseminating speech varies depending on the offence (public order, defamation, etc.). In some regimes, each re-publication (forward) constitutes a wholly new publication, while in others liability for repeating content is minimal. The level of due diligence a speaker exercises before sharing content varies widely based on the content and the platform. Context is also crucial. Imagine illegal content is circulating on Platform A and Platform B. On Platform A, the content is being used to incite violence, but on Platform B the content is being used to generate counter-speech against violence. As Rule 4(2) states that the contents of the messages cannot be disclosed, how do we differentiate between the originators on the two platforms? The first originator on Platform B may provide context by displaying the contents of her messages, but she should not have to; she should not even be implicated in a criminal proceeding for making constitutionally protected speech. All in all, Rule 4(2) is a blunt instrument most likely to limit the spread of both legal and illegal content by creating a massive chilling effect on users.

Are first originators the first?

Another major issue is the distinction between proving that content first originated from a particular device or user profile and proving that the person who owns the device sent the content. The possibilities for manipulation are endless, ranging from virtual sim-cards linked to foreign numbers (sold on all major app-stores for as little as ₹100) to picking up somebody’s phone or acquiring remote-access privileges. This manipulability and arbitrariness are aggravated by the fact that Rule 4(2) is limited to a single SSMI’s platform (excluding cross-platform posting) and by its geographic restrictions.

Imagine a piece of “illegal” content is widely circulating on WhatsApp (or, even better, a smaller messaging service falling below the threshold of an SSMI). User X, using a virtual (foreign) sim, cross-posts it to Telegram by sending it to his mother, and then uses her phone to forward it back to his Indian Telegram account. User X now forwards it to a Telegram group with 5,000 users. User X’s mother is the first originator. How far the identity of the ‘first originator’s’ user profile or device can aid in criminal prosecution or in curbing misinformation is therefore highly questionable.

Alternative measures

The State must also demonstrate that tracing the first originator is the least intrusive method of achieving its aim among equally effective alternatives. While there seems to be some uncertainty within the Union Government as to how the identification of first originators will be operationalised, the present proposals are particularly intrusive and put the privacy of other users at risk. An order under the IT Decryption Rules does not require judicial authorisation, and no remedy is provided to users. Because the government is itself a substantial actor on messaging platforms, the necessary independence of identification orders has not been secured. While Rule 4(2) prohibits an identification order from being passed where less intrusive measures exist, there exists no legal structure to guarantee, or even scrutinise, an incompetent or mala fide claim by an investigative agency that this is actually the case. Further, if hashing were to be employed, basic safeguards such as data retention and expiry limits are not in place: how long can a hash identifier associated with content remain active?

This leaves the Government with a high burden to demonstrate that Rule 4(2) achieves something other measures simply cannot. This is undermined by the fact that mobile platforms already provide the Government with a host of ‘basic subscriber data’ allowing it to trace users. For example, under the Criminal Procedure Code the Government already requests platforms to provide users’ phone numbers, names, device information, app version, start and end times, last connection, IP and email addresses, and web-client data. The Government also has other legal powers such as wiretapping, geo-location, and physical surveillance of suspects. Further, the Government can use human intelligence to infiltrate and track users on messaging platforms, as reporters have done to investigate the organised spread of misinformation. In summary, the Government has a host of alternative investigative tools, while citizens rely almost exclusively on encryption to protect their communications.

Conclusion

Encrypted communications are a thorny issue the world over, and law enforcement agencies are lobbying hard to access user messages. But given the importance of encrypted messaging to the autonomy and dignity of citizens, and its centrality to shaping public discourse in India, any restrictions must be strictly scrutinised through the lenses of the rule of law and due process to address the power imbalances that exist between citizens and the State. How Rule 4(2) is operationalised will have a substantial bearing on its legality. However, as it stands today, the identification of first originators requires weakening the privacy of millions of Indian users to ineptly trace a few potentially bad actors; actors whom it is unclear whether we should, or how we will, ultimately hold liable.