Intermediary Guidelines and the Digital Public Sphere: Balancing the Scales

The last two posts examined prominent new features of the Intermediary Guidelines 2021: automated filtering (here) and tracing first originators (here). However, the undoubted goal of the Guidelines, demonstrated by the express regulation of “Significant Social Media Intermediaries”, is to hold large social media companies accountable for their role in structuring online speech. This post examines the scheme of the Guidelines to assess how effectively they regulate social media.

I begin by examining liability for communicating unlawful speech to understand why intermediaries are granted immunity for hosting unlawful content. Next, I explain the power imbalance between tech companies and internet users. Finally, I analyse the Intermediary Guidelines’ attempt to remedy this power imbalance by providing various reliefs to users vis-à-vis online platforms.

Speech models and their inadequacy for online platforms

In determining liability for communicative acts, the law distinguishes between publishers and distributors. Publishers are liable for the content they publish, while distributors are typically not liable for the content they distribute because they are not deemed to know whether they are distributing unlawful content. For example, a newspaper may be liable for an article it solicited/sourced, compiled and edited, but a newsstand selling the newspaper will not be held liable because it could not be expected to know the contents of every article in every newspaper it sells. (Volumes could be written on the myriad nuances within these concepts, but acknowledging the distinction will suffice for now.)

However, this publisher-distributor distinction breaks down in the case of online platforms. They are not neutral distributors, as they actively moderate and rank content to determine what users see. But they are also not quite publishers, as they do not create the content on their platforms. Unlike a newspaper, whose employees write its articles, on platforms it is internet users, not a platform’s employees, who create content. Thus, the publisher-distributor distinction serves legislators poorly when deciding how to regulate online platforms.

Further, legislators worry that if online intermediaries were held liable for unlawful content on their platforms (like a publisher), intermediaries would either: (1) adapt their behaviour to fall under the distributor model and exercise no control over their platforms, leading to the internet being filled with the worst kinds of content; or (2) follow the publisher route and take down any content that was remotely unlawful, leading to vast horizontal censorship by online intermediaries and defeating the purpose of the decentralised, user-driven internet. Enter intermediary immunity.

Intermediary immunity and voluntary moderation

The approach taken by several legislatures, including India’s, has been to grant online intermediaries legal immunity for unlawful content on their platforms even though they filter and curate content. This approach serves two very important goals: (1) online intermediaries can voluntarily filter content without attracting a publisher’s liability, resulting in less abusive speech online for users; and (2) intermediaries have enough breathing space not to take down user content en masse, protecting users’ free speech against horizontal censorship. Rule 3(1)(d) (third proviso) of the Intermediary Guidelines 2021 expressly notes that intermediaries will not lose their status as neutral entities even though they may voluntarily remove a wide range of presumptively illegal content from their platforms.

But this is where things start to get tricky. The definition of ‘presumptively illegal speech’ set out in Rule 3(1)(b) of the Intermediary Guidelines is so broad that platforms have immense discretion over what content to take down within the scope of the rule itself. Under Rule 3(1)(b) intermediaries may take down content that (inter alia):

(ii) is defamatory, obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy, insulting or harassing on the basis of gender, libellous, racially or ethnically objectionable, relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws in force […]

(viii) threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting other nation […]

(x) is patently false and untrue, and is written or published in any form, with the intent to mislead or harass a person, entity or agency for financial gain or to cause any injury to any person

These are obviously very broad terms, and intermediaries may reasonably disagree with their users over whether content falls within these categories. This creates a conflict between the two goals of intermediary immunity. On the one hand, immunity is essential to incentivising platforms to freely take down the worst forms of content without risk of liability; on the other, the wide discretion Rule 3(1)(b) gives them means users will eventually complain of horizontal censorship (as has already happened on both sides of the political spectrum).

(Note: in Shreya Singhal the Supreme Court ruled that the contents of Rule 3(1)(b) should be limited to the restrictions contained in Art. 19(2) of the Constitution. This may be both normatively undesirable and impractical, as there are several categories of content that fall outside of Art. 19(2) that we may want platforms to voluntarily restrict, e.g., copyright and trademark infringing content, sexually explicit content not rising to the ‘public morality’ threshold, fraudulent content causing consumer harm etc.)

Thus, legislation/regulation must balance the protections granted to intermediaries with the risk of horizontal censorship. This is trickier than it sounds, as the underlying interests of platforms (maximising advertising revenue by the collection and sale of personal data) and users (free speech and privacy) can diverge significantly.   

The problem magnified manyfold

As I discussed in much greater detail (here), the problem of horizontal censorship is multiplied severalfold because of two crucial factors. First, large online social media platforms are central to public discourse. Second, because there are only a handful of these platforms, simply leaving a platform (or, worse, being suspended) can severely curtail a user’s participation in public discourse. As Kate Klonick poignantly notes, ‘we are dependent on these private platforms to exercise our public rights.’

The true extent of citizens’ vulnerability is demonstrated at several layers. First, as moderation is conducted by private entities, it is undemocratic. While platforms may respond to public outcry, such responsiveness is ad hoc, leaving the already marginalised disproportionately impacted with little hope of recourse. Next, platforms do not disclose their moderation policies. While ‘Terms of Service’ and ‘Community Guidelines’ are publicly available, the rules and procedures platforms actually use to take down content constantly change and are rarely disclosed. For example, Facebook ‘accidentally’ blocked the hashtag ‘ResignModi’, later reinstating the content with no explanation of the procedure employed in either the takedown or the reinstatement. Finally, the stranglehold online platforms have over public discourse may be leveraged by democratic governments to entrench their power by silencing criticism.

Balancing the scales

Just as gender and caste have historically constituted centres of private power, modern social media represents a new site where citizens are dominated. Regulation must seek to remedy this power imbalance while ensuring that the other interest behind intermediary immunity (incentivising voluntary moderation by not saddling platforms with a publisher’s liability) is not annihilated. The Intermediary Guidelines leave the basic structure of intermediary immunity untouched (save for several additional compliance requirements imposed on social media companies geared towards expanding investigative and enforcement powers) but attempt to remedy the power imbalance by granting users several reliefs vis-à-vis online intermediaries.

Just as platforms have the discretion both to take down and to not take down speech, internet users may also seek both outcomes. A user may be aggrieved by a piece of content and want an intermediary to take it down (the “Take Down Relief”). Alternatively, a user may be aggrieved by a platform’s decision to take down content, and want it reinstated (the “Keep-Up Relief”).

The Take Down Relief

One part of the Take Down Relief involves citizens approaching courts seeking orders (typically in the form of an injunction) against content being displayed online. The Supreme Court in Shreya Singhal ruled that it was impermissible to require online intermediaries to take down content merely because someone complained against it – a judicial order was necessary (¶117). The decision is in line with the twin goals of intermediary immunity stated above. Because platforms were protected from liability until they received a court order, intermediaries could continue to voluntarily remove the worst forms of content from their sites. Crucially, because platforms could not be sued for content on their sites until a court ruled the content was (at least prima facie) unlawful, intermediaries would not take down content en masse, thus avoiding horizontal censorship. Courts would weed out frivolous complaints and direct intermediaries to take down content only after balancing all the interests involved.

Thus, users do have a Take Down Relief against illegal content in the form of courts, and the Intermediary Guidelines 2021 do not interfere with this relief (in fact, they expressly recognise it). However, it is by no means a perfect relief, given the time and costs involved in getting a court order. Further, Rule 3(1)(b) is so broad that courts have little guidance (and a lot of discretion) when deciding whether online content should be taken down. For example, the Delhi High Court noted that the factors to be considered prior to a takedown include: (i) the comparative importance of the rights at issue; (ii) the availability of less restrictive measures; (iii) the costs associated with implementing the measures; and (iv) the efficacy of the measures implemented by the ISP. However, another bench of the High Court merely noted that the plaintiffs had demonstrated a prima facie case of defamation and directed the intermediary to block the content. Yet another bench directed search engines to de-index content and permitted a plaintiff to directly approach Internet Service Providers (ISPs) to block additional content uploaded after the court’s original takedown order, allowing ISPs to challenge the takedown if they believed the plaintiff had exceeded the terms of the original order (which they have no incentive to do).

Rolling back Shreya Singhal

Rule 3(2) of the Intermediary Guidelines 2021 also introduces a new Take Down Relief. A user may lodge a complaint where content “exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation…” The intermediary is obligated to take down such content within 24 hours of receiving a complaint. On the one hand, the Rule empowers users vis-à-vis social media platforms, and a cogent case may be made that where a user complains about explicit/intimate images of themselves uploaded without their consent, and no other interests are involved, the user should not be required to go to court. (Note: they may not even have to go to court if the intermediary agrees and takes down the content once flagged, but that approach leaves the relief at the sole discretion of the platform.)

On the other hand, requiring intermediaries to take down content at the behest of a private user was an approach expressly rejected by Shreya Singhal to protect against platforms being swamped by complaints and taking down all content that any other user disagreed with. The simple truth is that hard cases exist. For example, the content complained against may depict two people, one who wants the content to stay online and one who wants it taken down. Now imagine the content is also a matter of public interest. These delicate situations where rights need to be balanced are exactly why Shreya Singhal insisted on court oversight to guard against overzealous complainants and over-cautious platforms.

Efficacy probably lies somewhere between Rule 3(2) and Shreya Singhal, and the Guidelines could have: (i) distinguished between cases where interests need to be balanced and those where they do not; (ii) set up an independent body or fast-track process to hear takedown claims; or (iii) narrowed judicial discretion on when content should be taken down. However, the Guidelines ultimately fail to improve the court-order system of Shreya Singhal, and instead create a blanket workaround that may lead to content being taken down at the sole behest of a disgruntled user in delicate cases where rights ought to have been balanced by a court.

The Keep-Up Relief

Users may also have a grievance when platforms take down content that users believe should have stayed up (e.g., see the suspension of Sanjay Hegde’s Twitter account over: (i) the use of August Landmesser’s photo defying the Nazi salute; and (ii) retweeting a poem by a CPI (Marxist-Leninist) politician). As discussed above, while users can review ‘Terms of Service’ and ‘Community Guidelines’, the actual rules and procedures employed to take down content are not transparent.

Prior to the Intermediary Guidelines 2021, a user could pursue a contractual remedy, arguing that the speech did not violate the platform’s ‘Terms of Service’. Such remedies are unlikely to succeed, as these contracts are typically skewed in favour of platforms and confer broad discretion to take down content. For example, Facebook’s Terms of Service state that it can “remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.”

Hegde is presently pursuing a public law remedy under Article 226, arguing that Twitter performs a social/public function and should thus be required to respect his free speech (Art. 19) and non-discrimination (Art. 14) rights. Without going into the merits of this argument, such constitutional gymnastics could be avoided by legislation/regulation that directly gives users recourse against social media platforms. Enter the Intermediary Guidelines 2021.

Before we begin, it should be noted that the Intermediary Guidelines 2021 risk substantially aggravating the problem of intermediary takedowns itself, by incentivising/mandating social media platforms to employ automated filtering technologies. The imprecision of such technologies is likely to cause more users than ever to seek out a Keep-Up Relief. (See my previous post on the subject here.)

Rule 4(8) of the Guidelines attempts to provide such a Keep-Up Relief. The Rule states that where a Significant Social Media Intermediary voluntarily disables content, the platform must, prior to the disabling of content, provide the creator/poster of the content a “notification explaining the action being taken and the grounds or reasons for such action”. Next, the platform must also provide the user with a “reasonable opportunity” to dispute the platform’s decision and request the reinstatement of the content. This creates a regime where intermediaries must provide reasoned notice and meaningful review to internet users when content is taken down.

On the face of it, this is a step in the right direction as it forces social media platforms to employ some minimum rationality and allows users a chance to contest take down decisions. But how well will this structure of accountability hold up when confronted with hard cases (which it undoubtedly will be)? First, the obligation merely requires social media platforms to provide an opportunity for users to dispute the decision and does not improve on the standards of content evaluation to be applied by them – recall that Rule 3(1)(b) is broad and does not meaningfully restrict the platform’s discretion in decision making. Judges themselves struggle to impose uniform standards on when content should be taken down. While platforms may seek to ensure a measure of uniformity, unlike judicial decisions, the reasoning by platforms under Rule 4(8) is not required to be publicly available and no system of precedent applies – creating a risk of opaque and haphazard decisions.  

The Achilles’ heel of Rule 4(8)

It is also important to understand that the Intermediary Guidelines 2021 regulate the conditions for intermediary immunity, i.e., the conditions an intermediary must satisfy to not be treated as a publisher and sued for hosting unlawful content. Now, even if an intermediary breaches the notice and hearing obligation under Rule 4(8), the consequence is a loss of this legal immunity for hosting unlawful content. But the intermediary is not hosting the content (indeed, the whole dispute arises because the intermediary is refusing to host it), so losing this legal immunity poses no risk to the intermediary. Simply put, intermediaries cannot be sued over content they are not hosting, so there is no legal risk attached to arbitrary voluntary takedowns. This highlights the folly of trying to offer users recourse against moderation decisions through the conditions of intermediary immunity, which is all the Intermediary Guidelines are.

Lastly, Rule 4(8) does not open the door for users to approach a more legitimate body and argue for content to be reinstated. In fact, if courts were to intervene under Rule 4(8), this would undermine the structure of Section 79 and the Intermediary Guidelines, which are premised on treating intermediaries as neutral even though they voluntarily take down content. In the case of Take Down Reliefs, where Person A is aggrieved by content posted by Person B and asks a court to direct Intermediary X to stop publishing the content, the court can provide this relief while still upholding the legal fiction of intermediary neutrality (i.e., accepting Intermediary X’s decision not to take down the content, as it is neutral like a newsstand). Section 79, the Guidelines, and Shreya Singhal are premised on not examining the correctness of Intermediary X’s decision to keep the content up.

However, where a platform has taken down content and the user seeks a Keep-Up Relief, Person A would sue Intermediary X over its decision to take down Person A’s post. A court cannot logically grant relief without: (i) acknowledging that Intermediary X is not neutral, in which case it really is more like a newspaper and should be liable for its moderation decisions; (ii) going against the protection for voluntary moderation expressly provided in the Intermediary Guidelines; and/or (iii) engaging in the sort of constitutional and interpretative gymnastics that the Sanjay Hegde case involves. All this leaves the efficacy of the Rule 4(8) Keep-Up Relief at the mercy of large social media companies and their internal calculus as to whether offering users such recourse will garner them goodwill (see Facebook’s Oversight Board).

Conclusion

The problem the government sought to address with the introduction of the Intermediary Guidelines is a real one. Large social media companies have unbridled power over what users see online at a time when we increasingly live our lives on the internet. This problem cannot be solved unless regulation accepts two significant realities: (i) that intermediary immunity remains central to incentivising intermediaries to moderate content and to protecting users against horizontal censorship; and (ii) that the current moderation procedures employed by platforms lack transparency, accountability, and legal certainty, and disproportionately impact marginalised communities.

By relying solely on the structures of intermediary immunity, the Intermediary Guidelines demonstrate a singular lack of creativity when it comes to regulating online platforms. One alternative approach would be a separate statute setting certain minimum standards of conduct for intermediaries (no easy task, but at least one that begins with an acceptance of the problems that need to be solved). As it stands, the new Guidelines not only fail to provide users with efficacious remedies to either take down or keep up online content, but they also perpetuate the status quo where online platforms are free to exercise arbitrary and dominating power over Indian citizens – with the only meaningful regulatory intervention occurring through government blocking of content when platforms fail to toe the government line.

The author is grateful to Praharsh Johorey for his comments on the draft of this post.

Intermediary Guidelines and the Digital Public Sphere: Tracing first originators

The previous post (here) set out how social media companies are key facilitators of public discourse, and structure the digital public sphere. The Intermediary Guidelines distinguish between ordinary intermediaries and ‘Significant Social Media Intermediaries’ (“SSMIs”) and Rule 4 sets out “due diligence” obligations that SSMIs must satisfy to avail of legal immunity for content shared on their platforms. In other words, a violation of Rule 4 of the Intermediary Guidelines does not itself impose liability on SSMIs, but it exposes them to a significant risk of liability given the large volumes of content being transacted on their platforms.

This post examines the requirement that SSMIs providing messaging services identify the “first originator” of specific content on their platforms pursuant to judicial or government orders. I begin by setting out the content of the requirement. Next, I briefly examine the role of secure communications and anonymity under the Indian Constitution. I then set out the technical proposals as to how a first originator may be identified and finally evaluate whether Rule 4(2) would survive constitutional scrutiny.

The ‘Traceability’ Requirement

Rule 4(2) obligates SSMIs that are “providing services in the nature of messaging” (think WhatsApp, Signal, Telegram, and iMessage) to “enable the identification of the first originator of the information on its computer resource”. SSMIs are required to comply with this obligation in two situations:

(1) where a judicial order is passed; or

(2) where an order is passed under Section 69 of the IT Act and the Information Technology (Procedure and Safeguards for interception, monitoring and decryption of information) Rules, 2009 (“IT Decryption Rules”).

The IT Act defines an “originator” as anybody who generates, transmits, or stores content. The effect of the rule is to enable the identification of the first user profile on a computer resource to generate, transmit, or store a specific piece of information. While Rule 4(2) contemplates identification pursuant to a judicial order, it does not mandate one. Orders under Section 69 are passed by senior civil servants, so there is no meaningful check on executive power. Further, the Union Government insists this is a measure to fight illegal content that has widespread reach; however, Rule 4(2) itself contains no threshold for ‘virality’ and could in principle apply to any content that was shared more than once. If there is more than one “originator”, there is de facto a “first originator”.

Rule 4(2) includes three safeguards and creates one legal presumption. First, an identification order may only be passed for the purposes of the “prevention, detection, investigation, prosecution or punishment” of offences “related to” the sovereignty, integrity, or security of India, friendly relations with foreign states, public order, or the incitement of offences relating to any of these headings, as well as rape, sexually explicit material, or child sexual abuse. Second, an identification order cannot be passed where a less intrusive means of identifying the first originator exists. Third, no SSMI is required to disclose the “contents of any electronic message or any other information related to the first originator, or any information related to its other users”.

Finally, Rule 4(2) also states that if the first originator of content on the messaging platform is located outside India, the first originator within India (i.e., the first person who generates, stores, or transmits the content in India) “shall be deemed” to be the first originator with respect to that content.

Privacy and Proportionality in India

In the last post we examined how social media companies constitute the digital public sphere. This is borne out empirically in the case of messaging platforms as well. In a recent study conducted by the Reuters Institute and the University of Oxford, 52% of Indian respondents reported getting their news via WhatsApp. 60% clicked on news links, 46% posted or shared news on the platform, and 39% took part in group or private discussions. Messaging platforms facilitate public discourse and allow citizens to shape public opinion, perhaps best demonstrated by the high levels of political content on these platforms. Anonymity and security thus form crucial barriers against political speech being chilled.

Messaging platforms also allow individuals to share constitutionally protected but socially stigmatised views, ensuring individual autonomy and dignity. They allow people to securely discover and express themselves, and to effectively organise with other citizens to create centres of countervailing power. As the former UNHRC Special Rapporteur noted, being protected from the public gaze may allow citizens to discover and share ideas they may otherwise be persecuted for: “The ability to search the web, develop ideas and communicate securely may be the only way in which many can explore basic aspects of identity, such as one’s gender, religion, ethnicity, national origin or sexuality.” However, the security provided by privacy is especially fragile. Courts have recognised that where even the threat of surveillance exists without a remedy, there is an interference with a citizen’s privacy.

Almost two decades ago, the Supreme Court in PUCL recognised that Indians have a constitutionally guaranteed right to communicate privately. In Puttaswamy, the Court articulated a vision of privacy grounded in individual autonomy that interacts with and enables the enjoyment of other rights guaranteed by the Constitution, most notably the right to freely and privately hold and express opinions, and to associate with other citizens (¶412). In other words, privacy forms a necessary foundation for the enjoyment of the rights and privileges guaranteed by the Constitution. The Indian Constitution thus guarantees private and secure communications both to protect individual autonomy and to facilitate democratic self-governance.

Any infringement on a citizen’s right to communicate privately must therefore satisfy the test of proportionality: (1) the infringing measure must pursue a legitimate state aim; (2) the measure must substantially further the state aim; (3) the measure must be the least restrictive option amongst equally effective alternatives; and (4) the measure must not have a disproportionate impact on rights holders.

Rulemaking power

Before we examine the issue of privacy and encrypted messages, there exists a preliminary issue concerning the very power to frame such a rule. The prefatory text to the Intermediary Guidelines notes that the Guidelines are issued under the powers granted to the Union Government by Sections 87(2)(z) and 87(2)(zg) of the IT Act. The former grants the Union Government power to frame website-blocking rules and the latter grants power to frame rules regulating the immunity granted to intermediaries. In short, neither of the sub-clauses relates to monitoring or tracing content on computer networks. The government may argue that Rule 4(2) forms legitimate regulation of intermediary immunity, but this is belied by the fact that the IT Act itself grants the government the power to monitor and decrypt content in a separate and independent provision, namely Section 69. Section 69 has its own rule-making provision, Section 87(2)(y), and the government has already framed the IT Decryption Rules under that section.

Operationalising ‘Traceability’

There exists a gap between Rule 4(2) mandating that SSMIs identify the first originator and the platforms actually being able to do so – and this is because all major messaging platforms such as WhatsApp, iMessage, and Signal are end-to-end encrypted. This means that even if the messages on these platforms were monitored or intercepted, they would first need to be decrypted using a decryption key for their contents to be read. It is important to understand that the decryption key is stored on users’ devices and not with the platforms, so WhatsApp could not reveal the contents of messages even if it wanted to do so to comply with Rule 4(2). Further, the decryption key is unique between users and changes over time. So even if a decryption key were acquired, it would reveal the contents of one chat only for the limited period that the specific key was in use.
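
To make the mechanics concrete, here is a deliberately simplified sketch (my illustration, not WhatsApp’s actual protocol, which uses the Signal/Double Ratchet design with per-message key rotation) of why a relay server cannot read end-to-end encrypted messages: the key lives only on the users’ devices, so the platform only ever handles ciphertext.

```python
# Simplified illustration only: real messaging apps use the Signal/Double
# Ratchet protocol with rotating per-message keys, not Fernet. The point is
# that the key never leaves the users' devices, so the platform can only
# relay (and hand over) ciphertext.
from cryptography.fernet import Fernet

# Generated and held on the two users' devices, never sent to the platform.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)
receiver = Fernet(shared_key)

ciphertext = sender.encrypt(b"Meet at 5pm")

# This is all the platform ever sees.
print(ciphertext)

# Only a device holding the key can recover the plaintext.
print(receiver.decrypt(ciphertext))
```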

With this understood, the impossibility of the task demanded of SSMIs comes into view. How does a messaging platform trace a piece of content across thousands, potentially millions, of chats (none of which it possesses decryption keys for) to locate the first originator? This tension is borne out in the IT Decryption Rules drafted in 2009. The Rules define “decryption assistance” as “allow access, to the extent possible, to encrypted information”. This is further buttressed by Rule 13(3) of the IT Decryption Rules, which states that “Any direction of decryption of information issued under rule 3 to intermediary shall be limited to the extent the information is encrypted by the intermediary or the intermediary has control over the decryption key.”

Given that Rule 4(2) of the Intermediary Guidelines expressly states that an order to identify a first originator shall be “as per” the IT Decryption Rules, it may plausibly be argued that an identification order under Rule 4(2) would simply not apply to a platform that does not possess the decryption key. In fact, Facebook has expressly contended that a ‘best efforts’ obligation to assist the government does not contemplate a platform radically re-engineering its service to allow the government to trace originators. However, while the Union Government states that it does not want to break end-to-end encryption, it has insisted that platforms are obligated to modify their functionality to enable the tracing of first originators.

There have been two prominent proposals on how traceability may be achieved without breaking end-to-end encryption. The first proposal was mooted by one Professor Kamakoti and is discussed in Aditi Agrawal’s piece (here). More recently however, anonymous officials from the Ministry of Electronics and IT have argued that a “hash constant” may be used to identify originators.

Hashes

The idea of a hash is to assign every distinct message a unique hash identifier. Briefly, if User P sends the message “I plan to undermine the Indian Government” to User Q, the message is assigned a hash identifier, for simplicity say the identifier is ‘52’. User Q now forwards the message to Users R, S, and T, who go on to send it to hundreds or thousands more until it reaches User M, who believes the message to be illegal. Now, an investigative agency can ask the platform to run a search against all messages bearing the identifier 52 to find where it first appeared – with User P.
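
As a rough illustration (a sketch assuming a purely content-based hash, not any platform’s or the Ministry’s actual design), the platform would log an identifier for every relayed message and later return the earliest sender of a given identifier:

```python
# Hypothetical sketch of a purely content-based "hash constant" lookup:
# every relayed message is logged as (content_hash, sender, time), and an
# agency later asks for the earliest sender of a given hash.
import hashlib

message_log = []  # hypothetical server-side log of (content_hash, sender, time)

def relay(sender, text, time):
    content_hash = hashlib.sha256(text.encode()).hexdigest()
    message_log.append((content_hash, sender, time))

relay("P", "I plan to undermine the Indian Government", 1)
relay("Q", "I plan to undermine the Indian Government", 2)
relay("R", "I plan to undermine the Indian Government", 3)

def first_originator(text):
    target = hashlib.sha256(text.encode()).hexdigest()
    hits = [(time, sender) for (h, sender, time) in message_log if h == target]
    return min(hits)[1] if hits else None

print(first_originator("I plan to undermine the Indian Government"))  # "P"
```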

In her piece, Aditi notes that this may not work, as platforms generate hashes based on: (1) the contents of the messages; and (2) the keys between users, which are constantly changing. Therefore, the message sent from User Q to User R will have a different hash from the same message sent from User Q to User T. This means that any one hash would be of limited value, as it would only identify copies of the message exchanged between two specific users while a specific decryption key was in use. All other identical messages would have different hashes.
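
Again as a sketch (assuming, as Aditi describes, that the hash mixes in the rotating per-pair session key), identical messages stop sharing an identifier, defeating the lookup above:

```python
# Sketch of the objection: if the hash depends on the per-pair, rotating
# session key, identical messages no longer share an identifier, so no
# single hash can trace the chain of forwards.
import hashlib

def keyed_hash(session_key: bytes, text: str) -> str:
    return hashlib.sha256(session_key + text.encode()).hexdigest()

msg = "I plan to undermine the Indian Government"
key_q_r = b"hypothetical-session-key-between-Q-and-R"
key_q_t = b"hypothetical-session-key-between-Q-and-T"

# Two different identifiers for the exact same message contents.
print(keyed_hash(key_q_r, msg))
print(keyed_hash(key_q_t, msg))
```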

Ironically, if this is not the case, the consequences are far grimmer. Because hashing ties an identifiable value to the contents of a message (e.g., 52 = “I plan to undermine the Indian Government”), the platform, and consequently the government, could know every user on the platform who has that message on their phone. This is contrary to Rule 4(2) itself, which states that SSMIs shall not be required to disclose the contents of the message or any information related to other users. (Sidebar | it is entirely conceivable that over time the government shifts from searching for hashes that equal “I plan to undermine the Indian State” to hashes that equal “I don’t like the Indian Government.”)

Constitutional scrutiny

The proportionality test is a cumulative one, and for the sake of brevity I only highlight the most striking issues with Rule 4(2). First, the State bears the onus of demonstrating that the measure (tracing first originators) furthers its stated aims (preventing the incitement of offences against the integrity of India, curbing sexually explicit material, etc.). The law recognises that nearly any measure may potentially be useful or desirable for governments seeking to curb crime, and ideally requires the State to demonstrate that the measure in question is “necessary” to achieve its stated aims.

Why first originators?

It is unclear how tracing the first originator assists the State in achieving its aims. We cannot assume that the first originator created the content. This logic is defeated because Rule 4(2) cannot cover cross-posting; a Twitter user could create and upload a video that is subsequently downloaded and shared on WhatsApp – the first originator is not the creator. Rule 4(2) itself rejects the creation rationale by acknowledging that content may be created outside India but sent to India – creating a ‘first receiver’ of sorts. Now, if we were to argue that this ‘first receiver’ is facilitating the spread of the illegal content in India, how do we justify overlooking other originators for domestically sourced content? Imagine I send “illegal” content to User X, who forwards it to a group with several thousand users – who is facilitating the spread of illegal content, whom should the law be more focussed on identifying, and how should liability be apportioned between User X and me?

Further, as Nandan Kamat noted, secondary liability for repeating and disseminating speech varies depending on the offence (public order, defamation, etc.). In some regimes, each re-publication (forward) constitutes a wholly new publication, while in others liability for repeating content is minimal. The level of due diligence a speaker exercises before sharing content varies widely based on the content and the platform. Context is also crucial. Imagine illegal content is circulating on Platform A and Platform B. On Platform A, the content is being used to incite violence, but on Platform B it is being used to generate counter-speech against violence. As Rule 4(2) states that the contents of the messages cannot be disclosed, how do we differentiate between the originators on the two platforms? The first originator on Platform B may provide context by displaying the contents of her messages, but she should not have to; she should not even be implicated in a criminal proceeding for making constitutionally protected speech. All in all, Rule 4(2) is a blunt instrument most likely to limit the spread of both legal and illegal content by creating a massive chilling effect on users.

Are first originators the first?

Another major issue is the distinction between proving that content first originated from a particular device or user profile and proving that the person who owns the device sent the content. The possibilities for manipulation are endless, ranging from virtual SIM cards linked to foreign numbers, sold on all major app stores for as little as ₹100, to simply picking up somebody’s phone or acquiring remote-access privileges. This manipulability and arbitrariness are aggravated by the fact that Rule 4(2) is limited to a single SSMI’s platform (excluding cross-platform posting) and by its geographic restrictions.

Imagine a piece of “illegal” content is widely circulating on WhatsApp (or, even better, a smaller messaging service falling below the threshold of an SSMI). User X, using a virtual (foreign) SIM, cross-posts it to Telegram by sending it to his mother, and then uses her phone to forward it back to User X’s Indian Telegram account. User X now forwards it to a Telegram group with 5,000 users. Under the deeming provision, User X’s mother is the first originator. How far the identity of the ‘first originator’s’ user profile or device can aid in criminal prosecution or in curbing misinformation is therefore highly questionable.

Alternative measures

The State must also demonstrate that tracing the first originator is the least intrusive method of achieving its aim among equally effective alternatives. While there seems to be some uncertainty within the Union Government as to how the identification of first originators will be operationalised, the present proposals are particularly intrusive and risk the privacy of other users. An order under the IT Decryption Rules does not require judicial authorisation, and no remedy is provided to users. Because the government itself is a substantial actor on messaging platforms, the necessary independence of identification orders has not been secured. While Rule 4(2) prohibits an identification order from being passed where less intrusive measures exist, there exists no legal structure to scrutinise an incompetent or mala fide claim by an investigative agency that no such measures exist. Further, if hashing were to be employed, basic safeguards such as data retention limits and expiry are not in place – how long can a hash identifier associated with a piece of content remain active?

This leaves the Government with a high burden to demonstrate that Rule 4(2) achieves something other measures simply cannot. This is undermined by the fact that mobile platforms already provide the Government a host of ‘basic subscriber data’ allowing it to trace users. For example, under the Criminal Procedure Code the Government already requests platforms to provide users’ phone numbers, names, device information, app version, start and end times, last connection, IP and email addresses, and web-client data. The Government also has other legal powers such as wiretapping, geo-location, and physical surveillance of suspects. Further, the Government can use human intelligence to infiltrate and track users on messaging platforms, as reporters have done to investigate the organised spread of misinformation. In summary, the Government has a host of alternative investigative tools, while citizens rely almost exclusively on encryption to protect their communications.

Conclusion

Encrypted communications are a thorny issue the world over, and law enforcement agencies are lobbying hard to access user messages. But given the importance of encrypted messaging to the autonomy and dignity of citizens, and its centrality to shaping public discourse in India, any restrictions must be strictly scrutinised through the lenses of the rule of law and due process to address the power imbalances that exist between citizens and the State. How Rule 4(2) is operationalised will have a substantial bearing on its legality. However, as it stands today, the identification of first originators requires weakening the privacy of millions of Indian users to ineptly trace a few potentially bad actors – actors whom it is unclear we should, or how we will, ultimately hold guilty.

Intermediary Guidelines and the Digital Public Sphere: Automated Filtering

In a previous post (here), I argued that focusing on intermediary liability to regulate social media companies would likely weaken free speech and further entrench the power of large technology companies. However, with the adoption of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”), the government has adopted a liability-centred approach – so let us take stock.

As the name suggests, the Intermediary Guidelines traverse well beyond the regulation of online intermediaries and raise several issues as discussed by the IFF (here and here) and the SFLC (here). The Guidelines are currently subject to a comprehensive legal challenge before the High Court of Kerala. At their core, the Guidelines change the obligations imposed on online intermediaries in ways that may fundamentally alter the public sphere. This post focusses on the obligation of social media companies to use ‘automated filtering mechanisms’ to weed out ‘illegal’ content.      

The Context

Content on the internet typically concerns three parties: (1) ‘content originators’, who post or share content; (2) online intermediaries, who provide a platform for users to share content on; and (3) ‘content recipients’, who view the content posted by content originators. Conceptually, it is important to distinguish between websites that post their own content (a news organisation’s website) and online intermediaries that merely provide a platform (WordPress, which allows bloggers and readers to interact but has nothing to do with the content of individual blogs). When (inevitably) the material posted by a content originator violates the law, the question arises whether the online intermediary should be held secondarily liable for its role in spreading the illegal content.

Section 79(1) of the IT Act states that an online intermediary shall not be liable for this facilitative role. This ensures that intermediaries do not have to scrutinise each user post for potential illegality/liability that may be foisted on them, ensuring the free flow of user content on platforms such as Facebook and Twitter. However, given the wide reach of the internet, governments also typically want intermediaries to act as gatekeepers against some of the worst forms of content. As a result, in India, the immunity offered by Section 79(1) requires intermediaries to satisfy certain obligations. Crucially, under Section 79(2)(c), online intermediaries must “observe due diligence”. This is where the Intermediary Guidelines become relevant, as Rules 3 and 4 of the Guidelines prescribe the ‘due diligence’ obligations of online intermediaries. In other words, if intermediaries want to avail of the legal immunity for any illegal content on their platforms, they must satisfy the conditions of Rules 3 and 4 of the Intermediary Guidelines.

Scheme of the Guidelines

Rules 3 and 4 of the Intermediary Guidelines impose a system of differentiated obligations for ordinary online intermediaries and ‘Significant Social Media Intermediaries’ (“SSMIs”). As expected, the latter have significantly more onerous responsibilities to fulfil to avail of legal immunity. While this post focuses solely on SSMIs, it is worth spending a moment on this distinction. The Guidelines define an SSMI as an intermediary which: (i) has more “registered users in India” than a threshold set by the Union Government; and (ii) primarily or solely enables online interactions between users and allows them to access, upload, and share content. The Union Government has since stated that the threshold for an SSMI is 5,000,000 “registered users in India”.

As a practical matter, it is unclear exactly how “registered users in India” will be calculated. Geographic locations are easily spoofed on the internet, and the requirement is bound to create a few borderline cases involving some arithmetic gymnastics to avoid classification as an SSMI. To add to this, platforms such as Signal and Telegram do not release country-specific userbase data.

Taking a step back, it is also worth examining the trend of imposing additional monitoring and filtering responsibilities on large social media companies. The increased burden on SSMIs to moderate their platforms is a tacit admission that these platforms form the bedrock of public discourse, or a ‘digital public sphere’. In addition to cat videos, these platforms facilitate communication between citizens and the political centre of self-governance. Thus, these platforms aid democratic legitimation by allowing citizens to participate in and shape public opinion. Suspending a citizen’s social media account is the modern-day equivalent of gagging them at a town hall. Simply put, SSMIs perform a very public function in our democracy, and thus their regulation must balance the government’s interest in preventing harm in the public sphere (e.g., defamatory or presumptively illegal content) with the free expression rights of internet users (including the right to access/receive information).

Proactive Filtering Requirement

Rule 4 sets out the “due diligence” obligations of SSMIs. Rule 4(4) states that an SSMI:

shall endeavour to deploy technology-based measures, including automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or conduct, whether explicit or implicit, or any information which is exactly identical in content to information that has previously been removed or access to which has been disabled on the computer resource of such intermediary under clause (d) of sub-rule (1) of rule 3, and shall display a notice to any user attempting to access such information stating that such information has been identified by the intermediary under the categories referred to in this sub-rule.

Rule 4(4) thus creates a best-efforts obligation on SSMIs to utilise automated tools to proactively identify two classes of content: first, content that explicitly or implicitly depicts rape or child sexual abuse; and second, content that is exactly identical to content that has been taken down pursuant to Rule 3(1)(d). Under Rule 3(1)(d), content may be taken down pursuant to a court order or a government order (as noted here, there are virtually no limits on what the government may block). Although the provision does not expressly require an intermediary to block access to a page identified as hosting such content, absent a clear government clarification that users will still be permitted to view the flagged content, we may presume that the intermediary’s notice flagging the content will replace the web-page’s actual content – effectively blocking it. (Sidebar | As takedowns under court or government orders are themselves haphazard and regularly not disclosed to the public, it is unclear what “categories” the user-facing notice will use, drastically reducing its usefulness as a transparency aid.)
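
The Rule does not say how “exactly identical” content is to be detected. One common way platforms detect exact duplicates – offered purely as an illustrative sketch, not as anything the Guidelines prescribe – is to hash every upload and compare it against a store of hashes of previously removed content. Note how brittle the match is (a single changed byte evades it) and how indiscriminate it is (everything swept up in an overbroad blocking order is matched with equal force):

```python
# Illustrative sketch only: hash-matching uploads against a blocklist of
# hashes of content previously removed under Rule 3(1)(d).
import hashlib

removed_hashes = set()  # hypothetical store of hashes of blocked content

def record_takedown(content: bytes) -> None:
    removed_hashes.add(hashlib.sha256(content).hexdigest())

def is_exact_duplicate(upload: bytes) -> bool:
    return hashlib.sha256(upload).hexdigest() in removed_hashes

record_takedown(b"page blocked under a court or government order")
print(is_exact_duplicate(b"page blocked under a court or government order"))   # True
print(is_exact_duplicate(b"page blocked under a court or government order!"))  # False
```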

Overcompliance and Monitoring

Rule 4(4) gives intermediaries a choice: they must either monitor their platforms for select types of illegal content and replace it with a red-flag notice, or alternatively forgo the legal immunity offered by Section 79(1) and risk being held secondarily liable for facilitating the spread of illegal content. Despite the musings of certain commentators, the impact of giving intermediaries such a choice should be obvious. As Jack Balkin notes, this approach ‘raises problems of collateral censorship and digital prior restraint.’ In other words, conditioning legal immunity on the effectiveness of a platform’s monitoring and takedown of illegal content is bound to make platforms overly cautious. They are likely to take down content that could potentially be illegal but may not necessarily be found so if adjudicated upon. As Balkin notes, this phenomenon is inherent to systems of intermediary liability, as intermediaries have no incentive to protect the speech of unknown third-party users over their own coffers.

The result is that intermediaries may restrict a user’s post purely to avail of the legal immunity provided by Section 79(1) – because they do not want to run the risk that, if the post is later found to be illegal and they had failed to take it down, they could be held liable for disseminating it. Crucially, an intermediary may take down or block speech that a user could tenably claim is constitutionally protected, but the user’s claim does not sufficiently alter the risk calculus of the intermediary, which operates under the outsized threat of liability. This problem is aggravated by the oddity that users do not have free speech rights vis-à-vis online platforms (private companies), yet these platforms undeniably constitute part of the digital public sphere.

To avoid this very problem, the European Union expressly prohibits imposing a ‘general monitoring obligation’ on intermediaries to weed out illegal activity on their platforms (see Art. 15 E-Commerce Directive). In the context of requiring intermediaries to weed out copyright infringing material from their platforms, the European Court of Justice ruled as far back as 2012 that a general monitoring obligation was incompatible with the free speech rights of users as it may lead to legal content being caught in the dragnet of a web-filter. The court held that a general monitoring obligation exists where intermediaries had to install filters that: (i) were targeted at user content; (ii) applied indiscriminately to all users; (iii) were installed as a preventive measure; (iv) at the sole expense of the intermediary; and (v) for an unlimited period of time. Needless to say, Rule 4(4) ticks all the boxes of an unlawful general monitoring obligation.

Using Automated Filters

The use of automated systems is not new. Most large social media platforms already voluntarily use a host of automated tools to filter content. However, these are by no means perfect. From Facebook’s takedown of iconic war photography to Twitter’s suspension of an Indian lawyer’s account over the use of August Landmesser’s photo defying the Nazi salute – the mistakes of automated systems are well documented. Put simply, at their present state of development, automated systems are poor at evaluating the cultural context, irony, or critical analysis necessary to distinguish illegal from legal content. Now consider the types of content Rule 4(4) requires such automated tools to block: ‘explicit or implicit depictions of rape and child sexual abuse’. Automated systems are unlikely to accurately determine the various layers of nuance associated with such content, which may range from actual footage of child sexual abuse to critical commentary on sexual violence.

This issue is exponentially aggravated when the second target for automated filters is considered. Content may be taken down under Rule 3(1)(d) for a host of reasons, some lawful and some formally lawful but unconstitutional. A court may pass an order to take down defamatory content under Rule 3(1)(d). The Government may also pass an overbroad order blocking large swathes of content under Rule 3(1)(d). The effect is to create an ever-growing database of illegal content (infinitely more complex than even depictions of rape or child sexual abuse) that automated systems must continually weed out. Furthermore, under Rule 3(1)(d) specific URLs are restricted. If an automated system is unable to discern exactly what content on a web-page was illegal, there is a significant chance of collateral censorship, as all content on that originally blocked page may be flagged as illegal and automatically blocked going forward. Finally, there are no safeguards to ensure that once content is no longer restricted under Rule 3(1)(d) (if a court modifies or the government withdraws an order), the automated systems targeting it are immediately turned off.

Although Rule 4(4) uses the term “proactively”, it is unclear whether SSMIs will be obligated to restrict content even prior to publication. A filtering system that restricts content prior to publication fundamentally privileges the interests in favour of restraining speech. In other words, it treats all speech identified by the filter as presumptively illegal, contrary to settled free speech doctrine. Even if the content is published, a user must be given the right to contest the application of the filter before the content is taken down. While Rule 4(8) of the Intermediary Guidelines does postulate an ex-ante dispute resolution mechanism, it does not apply to content taken down pursuant to automated filtering under Rule 4(4). The problem is amplified when the relevance of the content is time-sensitive and no content-recipient has been identifiably injured. Famously, in Shreya Singhal, the Supreme Court struck down a requirement that intermediaries take down content at the behest of a private complaint absent judicial adjudication. Where this leaves a requirement that intermediaries take down content even prior to a private complaint remains to be seen.

Privacy

The use of automated filters also raises issues surrounding user privacy. Imposing a general monitoring obligation on intermediaries to constantly weed out “illegal” content requires them to conduct an ongoing analysis of all user content posted on their platforms. This exercise is likely to produce a large dataset linking user profiles to any “illegal” content posted by them. Consider a user profile which repeatedly posted content that violated a ‘depictions of rape’ filter or a ‘farmer protests’ filter – the intermediary would be in possession of this information, which may: (i) be requisitioned by the government; or (ii) be hacked or leaked and made available to the public devoid of any context.

Parchment Barriers

The Union Government is not unaware of these harms, and Rule 4(4) includes three safeguards aimed at preventing these issues. First, the measures taken by intermediaries must be “proportionate” and must consider the ‘interests of free speech and privacy of users.’ Second, intermediaries shall implement “mechanisms of human oversight … including a periodic review of any automated tools deployed”. Lastly, the intermediary “shall evaluate the automated tools having regard to the accuracy and fairness of such tools, the propensity for bias and discrimination in such tools, and the impact on privacy”.

On paper, these are excellent considerations to weigh when deciding whether to use automated filtering. However, as regulatory text, they lack the specificity and enforceability necessary to rein in the dangers of automated systems. None of the three safeguards provides any indication as to when an intermediary is in breach of them. How much oversight and review are necessary to maintain immunity? What level of efficacy must the automated systems achieve, and what is a tolerable error rate? What forms of data collection for the purposes of such systems are impermissible? Additionally, the Guidelines and the IT Act fail to provide any form of regulatory oversight. There is no dedicated body to police these incredibly complicated systems being unleashed on Indian users, meaning that in the real world compliance will be almost impossible to ensure on an ongoing basis.

Automated systems are not unregulatable. There exist several real-world proposals for monitoring such systems, such as vetting the source code, black boxes, and, perhaps most promisingly, ‘tinkering’. This involves systematically testing how such filters respond to sample content on an ongoing basis. However, such solutions require a regulatory framework and committed agencies. The safeguards set out in Rule 4(4) are more akin to parchment barriers – laudable on paper but easily circumvented by both intermediaries and the government in practice.

The so-called safeguards also raise the spectre of a more sinister threat that is now well established in India’s legal system: ‘discriminatory legalism’. Simply put, the legal uncertainty and the lack of a dedicated, independent agency to oversee SSMI filtering open the door for the government to selectively prosecute SSMIs for either violating the filtering obligation or violating the safeguards. For example, if an SSMI does not take down content unfavourable to the ruling party, a violation of the web-filtering obligation may be alleged; and if the SSMI takes down content favourable to the ruling party, a violation of the safeguards may be alleged.

Conclusion

This is the first post of a series examining the impact of the new Intermediary Guidelines on the digital public sphere in India. Requiring SSMIs to take an increased role in content moderation of public discourse is not an unreasonable demand. The deregulatory atmosphere in which online intermediaries emerged means that private companies now perform the core democratic function of facilitating public discourse and regulation must recognise this. The large volume of content on these platforms also means that the use of automated filters is inevitable.

However, this set of posts critically examines whether the form of regulation adopted by the Intermediary Guidelines is the best way of ensuring a vibrant digital public sphere. Making intermediaries’ legal immunity conditional on ceaselessly monitoring content fundamentally alters their risk calculus towards content takedown. The lack of a well-conceived regulatory framework to address the risks of automated filtering by social media companies leaves Indian citizens more at the mercy of large tech companies and government interference than ever. A diametrically opposite, but equally feasible, approach suggested by Martin Husovec is to fine intermediaries for over-zealously taking down content. This incentivises intermediaries to invest in improving their content removal systems, as opposed to India’s approach which, at best, allows the social cost of lost speech to go unaddressed and, at worst, incentivises intermediaries to take down more content than ever.

Social Media & Intermediary Liability: Missing the Forest for the Trees?

Recent events have once again brought into focus the question of imposing legal liability on online intermediaries, particularly social media companies. In the United States, Twitter’s decision to ‘flag’ President Trump’s tweet disparaging vote-by-mail procedures as inaccurate prompted the President to issue an executive order re-considering the qualified immunity granted to intermediaries (here). In India, Twitter voluntarily and independently ‘disabled’ two tweets by advocate Prashant Bhushan upon the initiation of contempt proceedings against the lawyer (here). All this while India is in the process of amending its rules under the Information Technology Act (“IT Act”) regulating online intermediaries (the “Intermediary Guidelines”).

The need to shield online intermediaries from liability to protect freedom of expression on the internet is well established. India’s new regulation seeking to make intermediaries monitor and take down content is a step back in this respect. But the proposed guidelines aside, in this post I argue that a regulatory focus on intermediary liability by the government ignores several larger structural issues with speech on the internet (especially on social media websites) and potentially hampers more robust and multi-faceted regulatory approaches. I begin by briefly setting out India’s intermediary regime (both existing and proposed) and the need to shield intermediaries from liability. I then attempt to sketch out the role of large social media companies in structuring speech on the internet and how an undue focus on intermediary liability further empowers already powerful actors at the cost of internet consumers and free speech. Without going so far as ‘breaking up big tech’, I explore possible regulatory measures that can counteract the power of social media companies over users’ speech.

Intermediary Immunity Grossly Simplified

Given the decentralised nature of the internet, online intermediaries have long been targets for legal liability for allegedly unlawful speech on the internet. Traditionally a “tort law principle of secondary liability for third party action” is applied against intermediaries. Simply put, a website may be sued for hosting unlawful content even though the website itself did not create or post the content (secondary liability), the unlawful content actually having been created and posted by an anonymous web-user (third party action or content).

Governments, however, quickly recognised that exposing intermediaries to this liability may make them extremely cautious and cause them to start deleting all third-party content that carries even a slight risk of legal liability. Not ideal for online business or free speech. Therefore, governments provided “immunity” or “safe harbour” to intermediaries except in narrowly defined situations. For example, Section 79 of the IT Act provides online intermediaries legal immunity for unlawful third party content if: (i) the content is merely temporarily stored or transmitted on the site; (ii) the intermediary takes down the content upon receiving “actual knowledge” of the unlawful content or upon being notified by the Government; or (iii) the intermediary complies with the Intermediary Guidelines more generally.

In an exceedingly rare moment of clarity, the Indian Supreme Court in Shreya Singhal held that online intermediaries could not be tasked with determining whether content was legal or not, and that “actual knowledge” meant a court order directing the take-down of unlawful content. In other words, intermediaries would only ever be subject to legal liability if a court of law directed them to take down content and they still refused to do so. (Note: this makes Twitter’s “disabling” of Prashant Bhushan’s tweets an entirely voluntary act, as there existed no court order directing the take-down. What it says about Justice Arun Mishra asking Twitter why it had not taken down the tweet is best left to the reader’s imagination.)

Proposed Amendments

As the intermediary’s “safe harbour” or shield against liability for hosting third party content is dependent on compliance with the Intermediary Guidelines, the content of these guidelines is incredibly important. As the Software Freedom Law Centre has reported (here), India’s new Intermediary Guidelines make continued immunity contingent on several problematic conditions, namely: (i) mandatory upload filters; (ii) traceability; (iii) a local incorporation requirement; and (iv) a twenty-four hour take-down requirement. These requirements are undeniably problematic, cumulatively restricting and chilling speech. For example, an upload filter would force intermediaries themselves to judge the legality of content before it is published (flying directly in the face of the reasoning in Shreya Singhal). Even worse, upload filters shift the burden onto the speaker to justify why their speech is not unlawful, rather than requiring a person offended by the speech or the government to justify why the speech should be taken down. This effectively makes restricting speech the norm and free speech an exception to that norm.

The proposed amendments to the Intermediary Guidelines should set alarm bells ringing, and interested readers should go through SFLC’s report. However, the focus of this post is to explain why the government’s focus on intermediary liability itself is misguided.

The Bigger Picture

The renewed political impetus to regulate intermediaries is a result of the new dual role of large internet companies, particularly social media companies. As Jack Balkin notes, large social media companies not only make speech available for our consumption but also curate the speech that we consume. For example, not only does Twitter allow a user to see the speech of millions of other users, but by selectively ranking, editing, and removing content Twitter also determines what speech a user sees and does not see. This second role of curation cannot be performed without the intermediary (e.g. Twitter) having its own substantive view on what speech is good speech and what speech is bad speech.

Social media companies often argue that they are content neutral, or that speech is tailored based on a user’s own interests. However, this is simply not borne out in practice. For example, when President Trump stated that vote-by-mail ballots were unsafe, Twitter ‘flagged’ it as potentially misleading, but Facebook carried the President’s statement as is. Simply put, Twitter’s substantive view on speech in the context of elections was different from Facebook’s. Therefore, despite being granted immunity, large intermediaries in reality voluntarily perform an editorial (or curating) function that determines what speech should be on their platform and what speech should not. These standards are often referred to as a platform’s “community guidelines”.

This voluntary curating function coupled with the massive market share of existing social media companies raises a significant issue. With the internet presently structured around just two or three social media companies, the probability that an individual citizen’s substantive view on good and bad speech will diverge from the social media company’s view on speech is extremely high. The most obvious manifestation is when a website takes down what a user may see as legal content, or alternatively refuses to take down what a user may see as illegal content. To be clear, it is not desirable to have content taken down merely because it is objectionable to another internet user (this is why Shreya Singhal imposed the court order requirement). However, when the user’s dissatisfaction with the social media site’s view of good and bad speech is examined in light of the user’s inability to choose another social media site or participate in the framing of the “community guidelines”, the curating role of social media companies absent any legal regulation becomes problematic.

Another way to look at this issue is that large social media companies have effectively created bottlenecks for speech on the internet, of which they are the sole unregulated gatekeepers. Gatekeeper functions are performed by most publishers; for example, a magazine may refuse to publish an author’s article because of the magazine’s political views. However, the essential role played by social media companies in facilitating speech on the internet and the tiny number of companies involved creates a huge asymmetry of power between internet users and social media companies, where an internet user cannot migrate to another social media platform in the same way an author can find another magazine to publish in. If a user wishes to participate in speech on the internet, they must subject themselves to the social media company’s views on speech in the form of arbitrarily enforced community guidelines. For example, the German Federal Court recently ruled that Facebook users were faced with a “false choice” between handing over huge amounts of private data to Facebook or not using the company’s ubiquitous social media service (here). In other words, internet users cannot ‘choose not to use Facebook’ because of its centrality to speech on the internet. The same dependence is also true of downstream companies and people who rely on social media companies for certain services (e.g. app developers for Apple’s App Store or YouTube’s content creators). This imbalance of power and the often arbitrary actions of intermediaries themselves have created the impetus for governments to step in and seek to impose new rules that would make the voluntary editorial function carried out by intermediaries more acceptable to the citizen’s (or government’s) substantive view on speech.

Lastly, a user’s legal recourse against intermediaries is extremely limited. For example, in 2019 Twitter disabled senior lawyer Sanjay Hegde’s Twitter account over: (i) his use of August Landmesser’s photo showing defiance of the Nazi salute; and (ii) his retweeting of a poem by a CPI (Marxist-Leninist) politician – incidentally, the original tweet was not taken down by Twitter. Hegde took Twitter to court alleging a violation of his free speech rights and a breach of Twitter’s own community guidelines. Twitter argued that as a private entity it was not obligated to guarantee Article 19(1)(a) rights. While there may exist a case for a contractual breach of the community guidelines, the episode highlights how even where internet users have the means and know-how to challenge an intermediary’s voluntary curating function, the law is ill-suited to ensure recourse.

Meaningful Regulation

Recall that intermediaries have always been soft targets for regulating speech online because they represent entities that the law can identify, regulate, and penalise in the otherwise decentralised world of the internet. India’s proposed new Intermediary Guidelines seek to make intermediaries even easier to identify and regulate (a local incorporation requirement) and open intermediaries up to legal liability if their view of speech does not comport with the government-imposed norm (upload filters). The problem with this approach from a free speech perspective is that using legal liability as a threat to force intermediaries to take greater responsibility for online expression will likely lead to the systematic over-removal of legitimate speech. For example, Twitter did not wait for a court order to remove Prashant Bhushan’s tweets, even though under the Shreya Singhal ruling it was legally entitled to do so. Irrespective of whether an intermediary’s community guidelines are lax or strict, the spectre of legal liability forces intermediaries to be extremely cautious and remove speech that may not be unlawful. Worse, the high cost of upload filters and local incorporation requirements automatically privileges large intermediaries such as Facebook and Google over smaller companies. Therefore, a regulatory approach focussed on intermediary liability not only fails to address the power imbalance between online intermediaries and their users, it further empowers existing intermediaries and incentivises them to be more aggressive in their voluntary curating function.

Understanding the problem created by user-dependence on social media companies to speak on the internet, but also recognising that weakening intermediaries’ “safe harbour” may not be a cogent response, government regulation must be more creative. “Breaking up big tech” has become an increasingly common demand amongst certain politicians. Without going into the merits of a government mandated break-up of companies such as Facebook, Google, and Amazon, less drastic steps may be possible. It is also important to recognise that the harms created by large online intermediaries are not identical. For example, Facebook and Twitter may act as bottlenecks for free speech on the internet. Amazon has been accused of using its dual role as a producer and a sales platform to discriminate against sales partners. Apple has been accused of discriminating against app developers before their apps can be listed on the App Store (the only way developers can supply their apps to users). Charges have been levelled against Google for rigging its page-rank system to ensure that competitor services do not appear in Google’s ubiquitous search results. These diverse harms will likely require individuated solutions beyond a blanket breakup of large internet companies (previous breakups of large telecommunications and steel companies have resulted in re-consolidation within a decade or two).

A regulatory response must first be able to identify where speech may be being stifled. Recognising that users are unable to migrate to alternative social media networks even when an intermediary takes down their speech without a court order, a European Digital Rights (“EDRi”) position paper explicitly recommends “bottleneck power” (the ability to preserve and lock in a user base) as a competition law metric by which online platforms should be judged (here). This can help regulators understand when users are locked into online speech ecosystems, resulting in online intermediaries having too much power.

To break down this power, both EDRi and Balkin advocate “interoperability” as a vital step that can restore significant power to internet users. A simple form of interoperability would allow users to access social media platforms from a variety of alternate services. For example, a user could access Twitter from a third-party app (not the Twitter app). This third-party app could display tweets purely chronologically, or use a different ranking algorithm than Twitter, allowing the user to escape Twitter’s speech curating function to a limited extent (Twitter’s ranking of tweets) and choose the third-party app that the user believes to be the most beneficial.
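A toy sketch of this simpler form of interoperability is below (Python; the `Post` fields and ordering functions are hypothetical, and no real platform API is assumed): the third-party client takes the same set of posts and simply substitutes its own ordering, here purely chronological, for the platform’s engagement-based ranking.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    engagement_score: float   # what a platform's own ranking would privilege

def platform_ranked(posts: List[Post]) -> List[Post]:
    # The platform's curating function: highest engagement first.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological(posts: List[Post]) -> List[Post]:
    # The third-party client's choice: newest first, no curation.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def render_feed(posts: List[Post], use_platform_ranking: bool = False) -> List[str]:
    # A third-party client would obtain posts through whatever read access the
    # platform exposes (hypothetical here), then apply the ordering its users prefer.
    ordered = platform_ranked(posts) if use_platform_ranking else chronological(posts)
    return [f"{p.author}: {p.text}" for p in ordered]
```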

A more robust form of interoperability would insist on a set of common internet protocols that allow users to communicate directly between different internet platforms (e.g. a Facebook user could directly message a Twitter user). This may sound unthinkable at present, but such common standards exist for email. An internet user is free to choose between a variety of email services but is assured that they can still mail users on other email services. As EDRi notes, if I migrate from Yahoo to Gmail, I do not automatically lose all my friends, followers, or contacts; thus the threshold to migrate is low, and user-dependence and lock-in are mitigated. By allowing users to migrate easily between different social media companies, such standards would incentivise social media companies to provide better services, and users would be free to choose the social media company that best reflects their substantive view of speech rather than being beholden to any one service’s “community guidelines”. For example, if I found my speech constantly falling foul of Facebook’s “community guidelines”, I could migrate to social media X but still be able to reach my erstwhile “friends”. This would also apply in reverse: if I felt that Facebook was not censoring content enough and I wanted an even more curated feed, I could migrate to social media Y with stricter “community guidelines”. In the long term, this would ensure more social media companies and continued interoperability (today, would you leave your email service for a new service that does not allow you to send emails to users with Gmail or Yahoo or Hotmail accounts?).
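The email analogy can be made concrete with a sketch along the following lines (Python; the `user@platform` address scheme and the provider registry are invented for illustration and do not correspond to any existing social media API): if platforms agreed on a common routing convention, a user on one service could message a contact on another, and migration would not sever existing connections.

```python
from typing import Callable, Dict

# Registry of provider-specific delivery hooks, analogous to mail servers
# agreeing on SMTP. Each provider only needs to implement its own hook.
PROVIDERS: Dict[str, Callable[[str, str, str], None]] = {}

def register_provider(name: str, deliver: Callable[[str, str, str], None]) -> None:
    PROVIDERS[name] = deliver

def send(sender: str, recipient: str, body: str) -> None:
    """Route a message across platforms using an email-style address."""
    user, _, platform = recipient.partition("@")
    if platform not in PROVIDERS:
        raise ValueError(f"No interoperable provider registered for '{platform}'")
    PROVIDERS[platform](sender, user, body)

# Two hypothetical platforms implementing the shared convention:
register_provider("socialx.example", lambda s, u, b: print(f"[socialx] {s} -> {u}: {b}"))
register_provider("socialy.example", lambda s, u, b: print(f"[socialy] {s} -> {u}: {b}"))

# A user who migrates from socialx to socialy keeps reaching old contacts:
send("me@socialy.example", "friend@socialx.example", "Still in touch after migrating.")
```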

It is important to note that internet companies have systematically resisted moves towards such forms of interoperability. For example, Twitter limits the number of users a third-party Twitter app can host. Neither Twitter, Facebook, nor YouTube provides meaningful application programming interfaces (APIs) that would allow for a service that collates your Facebook, Twitter, and YouTube feeds. Apple openly uses a “walled garden” approach to push sales of additional Apple-only compatible hardware.

Lastly, governments should look to set up specialised tribunals or regulators that improve recourse for internet users against the actions of intermediaries. Rather than requiring a user to approach regular courts to allege a contractual breach of community guidelines by the intermediary, specialised tribunals offering quick and meaningful dispute resolution would also incentivise better intermediary behaviour. The online nature of these disputes is also an opportunity to examine online-only dispute settlement mechanisms such as virtual tribunals or Lok Adalats.

Conclusion

This post stemmed from two excellent articles written by Jack Balkin (here) and Lina Khan and David Pozen (here). Balkin’s initial approach was to suggest imposing fiduciary obligations on intermediaries to ensure intermediaries do not act arbitrarily or like “con-men” with respect to user data. As Khan and Pozen note, an approach that centres around the regulation of intermediaries ignores the larger realities of the internet eco-system within which intermediaries operate today. Large internet companies already owe fiduciary obligations to stockholders to maximise value, which is often done by a business model reliant on the spread of divisive, inflammatory content and eroding user privacy. For example, the New York Times reported on an individual spreading political disinformation purely to capitalise on Google ad-revenue (here). When we recognise that these social media companies also form the cornerstone of modern public discourse, the magnitude of the problem is put into perspective. As Khan and Pozen conclude, the business model matters, as do economic realities.

A regulatory approach that focuses entirely on whether intermediaries should be held liable for third party content is unlikely to address the harms stemming from the extreme user dependence on large social media sites. Given the key role social media companies play in curating speech on the internet and the outsized market share these companies possess, there is bound to be a mismatch between a user’s substantive view of speech and the views enforced by the platforms available to them, resulting in the stifling of potentially lawful speech. Recognising that users are increasingly locked into a handful of social media ecosystems, regulation of speech on the internet should work towards dismantling the gatekeeping power of large social media companies and putting power back in the hands of individual speakers to choose platforms of their choice and reclaim public discourse.


The author is grateful to Shweta Reddy from the Centre for Internet and Society for her inputs on this post.

Online Speech and Intermediary Liability: The Delhi High Court’s MySpace Judgment

On the 23rd of December, a division bench of the Delhi High Court handed down its judgment in MySpace vs Super Cassettes. In the process, the Court laid down some important legal principles regarding intermediary liability in the case of copyright claims, the scope of safe harbour provisions, and injunctions. The judgment, therefore, has significant ramifications for online speech, and ought to be studied closely. It is also worth reading because it is one of the first Indian judgments (to my knowledge) that focuses upon the special characteristics of the Internet from the point of view of expanding online speech, and not contracting it. As I shall argue, on the core question before it – that is, balancing online speech with the statutory mandate of protecting copyright – the Court successfully negotiated a minefield of legal and technical issues, and – apart from one finding – managed to tread a fine line between the competing interests.

Background

In 2008, Super Cassettes filed a suit against MySpace. It argued that many of the songs and other audio-visual content that MySpace hosted on its website violated its copyright. It further argued that MySpace profited from this through advertisements that were inserted into the videos. Furthermore, MySpace’s existing systems of protecting copyright – i.e., a terms of service agreement with its users directing them not to violate copyright, a notice-and-takedown system under which MySpace would take down copyright-infringing material on being notified by the rights holder, and a Rights Management Tool, whereby copyright holders could sign up, create digital fingerprints of their content, and then submit them to MySpace, which would then block mirror content – could not absolve it of liability under Indian law. Super Cassettes argued that MySpace had violated provisions of the Copyright Act. It sought a permanent injunction, and damages.
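As a rough sketch of how a tool of this kind operates (Python; real rights-management systems rely on perceptual audio/video fingerprints rather than the exact hashes used here, and the class and method names are mine, not MySpace’s): rightsholders register fingerprints of their works, and each new upload is checked against that registry before being published.

```python
import hashlib
from typing import Dict, Optional

class RightsRegistry:
    """Simplified stand-in for a rights-management / fingerprinting tool."""

    def __init__(self) -> None:
        self._fingerprints: Dict[str, str] = {}   # fingerprint -> rightsholder

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Real systems derive robust perceptual fingerprints; an exact hash is
        # used here only to keep the sketch self-contained.
        return hashlib.sha256(content).hexdigest()

    def register(self, rightsholder: str, content: bytes) -> None:
        self._fingerprints[self.fingerprint(content)] = rightsholder

    def check_upload(self, content: bytes) -> Optional[str]:
        # Returns the claiming rightsholder if the upload matches, else None.
        return self._fingerprints.get(self.fingerprint(content))

registry = RightsRegistry()
registry.register("Super Cassettes", b"<bytes of a registered sound recording>")

match = registry.check_upload(b"<bytes of a registered sound recording>")
if match:
    print(f"Blocked: matches content registered by {match}")   # mirror content blocked
else:
    print("Published")
```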

MySpace argued, on the other hand, that it was an intermediary, and had no role to play in making, modifying, or uploading content upon its website. The advertisements that were shown just before a video played were through an automated process, depending upon matches between keywords. Consequently, MySpace was not modifying any content in its own right, and was protected under the safe harbour provisions of the Information Technology Act. Furthermore, a general injunction to remove copyright-infringing material was impossible to comply with, because not only were there more than 100,000 works on the website which MySpace would have to sift through, but it would also have to monitor every future work that was uploaded on its website.

On hearings for interim orders, the single judge of the High Court found that MySpace was prima facie in breach of the Copyright Act by providing a “space” where infringing material was published, and with “knowledge” that such material was being published (the two requirements under Section 51(a)(ii) of the Copyright Act). The single judge further held that MySpace’s role went beyond that of an intermediary; and in any event, under the proviso to Section 81 of the Information Technology Act, the provisions of that Act (including the safe harbour provisions) were clearly subservient to copyright law. MySpace’s safeguards – such as the Rights Management Tool – were not relevant to a finding of liability for copyright infringement, and could only come into play as factors mitigating damages. Consequently, the single judge granted a quia timet injunction (i.e., an injunction against future acts), prohibiting MySpace from allowing uploads of any material that breached Super Cassettes’ copyright, and directing it to delete from its website the songs and other content about which Super Cassettes provided it with the relevant details.

MySpace appealed.

Issues before the Division Bench

At the outset, it is important to note that the division bench of the High Court was concerned only with the interim order of injunction, and not with the overall suit between the parties. Consequently, Justice S. Ravindra Bhat, writing the judgment of the Court, framed the following three legal issues for adjudication:

 “First whether MySpace could be said to have knowledge of infringement as to attract Section 51(a)(ii) and consequent liability; Second, does proviso to Section 81 override the “safe harbor” granted to intermediaries under Section 79 of the IT Act; third, possibility of harmonious reading of Sections 79 and 81 of the IT Act and Section 51 of the Copyright Act.” (paragraph 31)

Knowledge of Infringement

Section 51(a)(ii) of the Copyright Act states that copyright shall be infringed when a person “permits for profit any place to be used for the communication of the work to the public where such communication constitutes an infringement of the copyright in the work, unless he was not aware and had no reasonable ground for believing that such communication to the public would be an infringement of copyright.”

The Division Bench agreed with the single judge that MySpace’s website was a (virtual) “place”, from which MySpace earned profit (paragraph 34). The key question was whether, in the absence of specific notice, MySpace could be said to be “aware” or have “reasonable grounds to believe” that copyright infringement was taking place on its website. The single judge had held that the very presence of safeguard provisions and tools (such as notice-and-take-down) in MySpace’s user agreements signalled a “general awareness” that copyright was being infringed on its website, and this was enough for liability to be attracted under S. 51(a)(ii). The Division Bench disagreed. In paragraph 35, Justice Bhat held:

“Simply put, that test [of general awareness] overlooks that unlike “real” space, in a virtual world, where millions of videos are uploaded daily, it is impossible under available technology standards to identify the streaming content, which actually infringes. Knowledge has a definite connotation, i.e a consciousness or awareness and not mere possibility or suspicion of something likely. The nature of the Internet media is such that the interpretation of knowledge cannot be the same as that used for a physical premise.”

He then went on to develop this argument in the succeeding paragraphs, noting that in the specific context of the Internet, where a system could store “millions” of videos, the concept of “constructive”, or assumed knowledge, from the offline world, could not simply be transplanted here. Rather:

“The requirement is to give specific information to the content host or the website (MySpace) that infringement occurs with respect to the specific work. A general or vague description of the works would be insufficient as this then leaves room for MySpace to rely guesswork as to what content has to be removed. Therefore, the onus is upon the plaintiff to give detailed description of its specific works, which are infringed to enable the web host to identify them.” (paragraph 36)

Super Cassettes argued, however, that it had notified MySpace of a list of more than 100,000 songs in which it held copyright. The Division Bench held that this was insufficient, especially because when MySpace had done a preliminary scan of these 100,000 songs, it had found that many of them had been uploaded by distributors or performers who might well have been covered under copyright exceptions (such as, perhaps, fair use). Consequently, the Division Bench held:

“It is only when a specific work is mentioned can it be said that MySpace possesses knowledge of a work being infringed on its website. Providing long lists of all works, tasking MySpace with identifying and removing infringing content is not enough. It is only when MySpace has specific or actual knowledge or when it has reasonable belief, based on information supplied by SCIL and if despite such knowledge or reasonable belief it fails to act can it be held liable for infringement…  in other words, an indiscriminate and blind acceptance of SCIL’s entire list to run a general filter and “take down” all content would result in grave damage and result in likely multifarious disputes: with up-loaders, many of whom are original creators in their own right and might have used a miniscule quantum of the copyrighted content in… their larger original creation; with distributors, who might hold genuine licenses and with others who create versions, remixes or original titles which may have little content; still there may be other content uploaders whose material only superficially resembles with the titles owned by SCIL, because of the lyrics or titles but is otherwise genuine creation with its independent copyright.” (paragraph 38)

In the present case, therefore, the Court held that MySpace had no prima facie knowledge of infringement (paragraph 39).

The Interplay Between the Copyright Act and Intermediary Liability under the IT Act

This brought the Court to issues 2 and 3 (see above). MySpace had sought sanctuary under Section 79 of the Information Technology Act. Section 79 provides a safe harbour to intermediaries from being held liable for unlawful content on their servers. This safe harbour is lost if the intermediary receives “actual knowledge” that it is hosting unlawful content. And recall that in Shreya Singhal vs Union of India, the Supreme Court had held that “actual knowledge” under Section 79 was limited to either a court order, or a government directive.

As we can see, there is an overlap between Section 51 of the Copyright Act and Section 79 of the IT Act. Section 51(a)(ii) makes an entity liable for secondary infringement if it is “aware” or has “reasonable grounds for believing” that infringement is happening on its “place”. Section 79, which otherwise protects intermediaries, nonetheless revokes this protection if the intermediary has “actual knowledge” that its platform is being used for law-breaking (which includes copyright violation). Now, the key question is this: if the general standard of protection afforded to intermediaries under the IT Act (actual knowledge) – as interpreted in Shreya Singhal – is greater than the general protection in cases of secondary infringement under the Copyright Act (non-awareness), then which of the two laws will apply to the specific case of an intermediary whose website is being used for potential copyright infringement? Or, to put the question another way: if the intermediary does not have “actual knowledge” (as per Shreya Singhal) under S. 79 – that is, if it has not been notified by a court order or by the government that copyright infringement is taking place – can it nonetheless be held liable under Section 51 if it has the “awareness” of copyright infringement (as held by the Division Bench in Issue 1) taking place?

It is here that I part ways with the Division Bench. Super Cassettes argued that the IT Act was not applicable at all to cases of copyright infringement, since the proviso to Section 81 of that Act stated that “nothing contained in this Act shall restrict any person from exercising any right conferred under the Copyright Act.” The Division Bench – in my view, correctly – rejected this argument. It pointed to the language of Section 79, which itself contains an overriding clause: “Notwithstanding anything contained in any law for the time being in force but subject to the provisions of sub-sections (2) and (3)…” – which, according to the Division Bench, meant “that the only restriction to be placed in the application of Section 79(1) is contained within the section: Section 79 (2) and Section 79 (3).” Consequently:

“In this Court’s opinion, Section 79 grants a measured privilege to an intermediary. However, that would not mean that the rights guaranteed under the Copyright Act are in any manner curtailed. All Section 79 does is regulates the liability in respect of intermediaries while the Copyright Act grants and controls rights of a copyright owner. Under the circumstances, it is difficult to conceive how one would pose a barrier in the applicability of the other. The true intent of Section 79 is to ensure that in terms of globally accepted standards of intermediary liabilities and to further digital trade and economy, an intermediary is granted certain protections. Section 79 is neither an enforcement provision nor does it list out any penal consequences for non-compliance. It sets up a scheme where intermediaries have to follow certain minimum standards to avoid liability; it provides for an affirmative defence and not a blanket immunity from liability.” (paragraph 47)

In the succeeding paragraphs, the Court stressed on the fact that “Parliament by amending the IT Act intended to create a separate provision and regime for intermediaries…”, and that “given the supplementary nature of the provisions- one where infringement is defined and traditional copyrights are guaranteed and the other where digital economy and newer technologies have been kept in mind, the only logical and harmonious manner to interpret the law would be to read them together.” So far, so good. However, in paragraph 50, the Court then held:

“Section 79(3) of the IT Act specifies that when a person has actual knowledge or upon notification by the appropriate government or its authority fails to expeditiously remove or disable access to an unlawful content then the immunity granted under Section 79(1) is revoked. The knowledge contemplated under this section is actual knowledge. In Shreya Singhal (supra), Section 79(3) with Rule 3(4) of the Rules were read down to mean receipt of actual knowledge from a court order or on being notified by the appropriate government. However, this was in respect of restrictions under Article 19(2) of the Constitution of India. The Supreme Court was conscious of the fact that if millions of requests for take down are made, it would become difficult for intermediaries (such as Google) to identify legitimate requests. In the case of copyright laws it is sufficient that MySpace receives specific knowledge of the infringing works in the format provided for in its website from the content owner without the necessity of a court order.”

However, instead of reading Section 51 of the Copyright Act and Section 79 of the IT Act harmoniously, this effectively subordinates the latter to the former. In the first part of its judgment, the Court had undertaken an independent analysis of Section 51, and arrived at a “knowledge” standard for intermediaries. This standard – of specific notice – was considerably less protective than Section 79 of the IT Act’s requirement of “actual knowledge” through a court order or the government. In paragraph 50, the Court simply adopted the specific notice standard “in the case of copyright laws.” In other words, intermediaries who were otherwise protected under Section 79 of the IT Act would lose their safe harbour if they were given specific notice of copyright infringement by the copyright owner, even in the absence of a court order or a government directive.

The Court justified this by pointing to the “red flag” requirements under the American DMCA, which is another form of notice-and-take-down. That, however, cannot be an answer to why the actual knowledge standard laid down in Shreya Singhal should be diluted to specific notice in the case of copyright claims. And the broader problem is this: even under the Division Bench’s heightened requirement of “specific notice”, the primary onus of deciding upon the validity of a copyright claim, and deciding whether or not to take down content (thus restricting online speech), rests upon a private party – MySpace. The problems with this approach have been well-documented: facing the threat of losing their safe harbour and being sued for damages, private parties will err on the side of taking down content. MySpace, in all likelihood, will have neither the resources to parse Super Cassettes’ notice to see whether the infringing content is saved by fair use, or by de minimis use, or any other exception to copyright infringement, nor will it be willing to take the risk of affirmatively rejecting a notice and then facing the possibility of extended litigation. It was precisely because of this that the Supreme Court in Shreya Singhal restricted “actual knowledge” to a court order or a government notification. In my view, the Division Bench unjustifiably dilutes that standard for copyright claims.

Furthermore, the matter is somewhat complicated by the operative part of the judgment. In paragraph 68(b), in recording its findings, the Court notes:

“Section 51(a)(ii), in the case of internet intermediaries contemplates actual knowledge and not general awareness. Additionally, to impose liability on an intermediary, conditions under Section 79 of the IT Act have to be fulfilled.”

However, this seems to run counter to what came before: under this, Shreya Singhal’s actual knowledge standard under Section 79 would apply over and above Section 51’s actual knowledge standard that the Division Bench laid down while discussing Issue 1.

Here is one possible reconciliation: since MySpace was already following a notice-and-take-down rule, the Division Bench’s discussion on specific notice should be limited to finding that what MySpace was doing was consistent with law. The Division Bench did not say that a specific notice would be sufficient to erase safe harbour as a general proposition of law. Rather, in paragraph 68(b), it made clear that the general proposition of law was that intermediaries would continue to be protected even in cases of copyright infringement, unless Shreya Singhal’s standard of actual knowledge was satisfied.

This would be a far more speech-protective reading of the judgment. However, it is a difficult one to sustain, in light of the clear observations of the Court, which we discussed above. This is one issue, then, on which the judgment’s import is – in my view – slightly unclear.

Quia Timet Injunctions

Lastly – and very importantly – the Division Bench agreed with MySpace’s contention that the quia timet injunction was virtually impossible to comply with. Justice Bhat made the crucial observation that:

“A further balancing act is required which is that of freedom of speech and privatized censorship. If an intermediary is tasked with the responsibility of identifying infringing content from non-infringing one, it could have a chilling effect on free speech; an unspecified or incomplete list may do that. In an order of relief such as that passed by the learned Single Judge, MySpace would be in contempt of court for not complying with an order, which is otherwise impossible or at best onerous and cumbersome of performance. In order to avoid contempt action, an intermediary would remove all such content, which even remotely resembles that of the content owner. Such kind of unwarranted private censorship would go beyond the ethos of established free speech regimes.”

(Of course, as I argued above, it is precisely this privatised censorship which is a problem with the specific notice standard that the Court did endorse).

The Court also noted that if MySpace was forced to remove content in accordance with the vague terms of the quia timet injunction, it might even end up being in breach of its contract with its users, in cases where lawful content ended up being taken down. Consequently, the Court concluded by holding:

“Without a notice containing the details and location of the exact works in which infringement is complained of, MySpace cannot be expected to scan through such large number of videos to discern infringement. This is not only impractical but also dangerous for reasons discussed previously. A vague order of injunction against works which are yet to exist is not only contrary to law but also impossible to monitor. Therefore, SCIL cannot give vague and general lists of its works but will have to give notice with specific details as well as locations of the works, which the appellant shall remove within 36 hours of receiving such notice.” (paragraph 66)

This is very important because it bucks the trend of High Courts granting quia timet injunctions (accompanied by John Doe orders) almost as a matter of course.

Conclusion

MySpace vs Super Cassettes is a landmark judgment that contributes in no small measure to an evolving international jurisprudence on online free speech and intermediary liability. The High Court pays close attention to the characteristics of the internet, and is very attentive to how its holdings will affect the flow of information online. Unlike other judgments, where the fluid and flexible nature of the internet has been used to justify greater regulation, the Delhi High Court in MySpace is concerned to ensure that this fluidity and flexibility is not stifled by legal concepts developed for the offline world. While its exact finding on the interplay between the Copyright Act and the IT Act is a little unclear, the judgment breaks new ground in its analysis of actual knowledge (for intermediaries) under the Copyright Act, its refusal of a quia timet injunction, and its finding that the Copyright Act does not exclude the safe harbour provisions of the IT Act. It is a judgment that should be studied and discussed closely.