The last two posts examined two prominent new features of the Intermediary Guidelines 2021: automated filtering (here) and the tracing of first originators (here). However, the undoubted goal of the Guidelines, demonstrated by the express regulation of “Significant Social Media Intermediaries”, is to hold large social media companies accountable for their role in structuring online speech. This post examines the scheme of the Guidelines to assess how effectively they regulate social media.
I begin by examining the liability for communicating unlawful speech to understand why intermediaries are granted immunity for hosting unlawful content. Next, I explain the power imbalance between tech companies and internet users. Finally, I analyse the Intermediary Guidelines’ attempt to remedy this power imbalance by providing various reliefs to users vis-à-vis online platforms.
Speech models and their inadequacy for online platforms
In determining liability for communicative acts, the law distinguishes between publishers and distributors. Publishers are liable for the content they publish, while distributors are typically not liable for content they distribute because they are not deemed to know whether they are distributing unlawful content. For example, a newspaper may be liable for an article it solicited/sourced, compiled and edited, but a newsstand selling the newspaper will not be held liable because it could not be expected to know the contents of every article in every newspaper it sells. (Volumes could be written on the myriad nuances within these concepts, but acknowledging the distinction will suffice for now.)
However, this publisher-distributor distinction breaks down in the case of online platforms. They are not neutral distributors, as they actively moderate and rank content to determine what users see. But they are also not quite publishers, as they are not responsible for creating the content on their platforms. Unlike a newspaper, whose employees write its articles, on platforms it is internet users, not a platform’s employees, who create content. Thus, the publisher-distributor distinction serves legislators poorly when deciding how to regulate online platforms.
Further, legislators worry that if online intermediaries were held liable for unlawful content on their platforms (like a publisher), intermediaries would either: (1) adapt their behaviour to fall under the distributor model and exercise no control over their platforms, leading to the internet being filled with the worst kinds of content; or (2) follow the publisher route and take down any content that was remotely unlawful, leading to vast horizontal censorship by online intermediaries and defeating the purpose of the decentralised, user-driven internet. Enter intermediary immunity.
Intermediary immunity and voluntary moderation
The approach taken by several legislatures, including India’s, has been to provide online intermediaries legal immunity for unlawful content on their platforms even though they filter and curate content. This approach secures two very important goals: (1) online intermediaries can voluntarily filter content without fear of a publisher’s liability, resulting in less abusive speech online for users; and (2) intermediaries have enough breathing space to not take down user content en masse, resulting in the protection of users’ free speech against horizontal censorship. Rule 3(1)(d) (third proviso) of the Intermediary Guidelines 2021 expressly notes that intermediaries will not lose their status as neutral entities even though they may voluntarily remove a wide range of presumptively illegal content from their platforms.
But this is where things start to get tricky. The definition of ‘presumptively illegal speech’ set out in Rule 3(1)(b) of the Intermediary Guidelines is so broad that platforms have immense discretion over what content to take down within the scope of the rule itself. Under Rule 3(1)(b), intermediaries may take down content that (inter alia):
“(ii) is defamatory, obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy, insulting or harassing on the basis of gender, libellous, racially or ethnically objectionable, relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws in force […]
(viii) threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting other nation […]
(x) is patently false and untrue, and is written or published in any form, with the intent to mislead or harass a person, entity or agency for financial gain or to cause any injury to any person”
These are obviously very broad terms, and intermediaries may reasonably disagree with their users over whether content falls within these categories. This creates a conflict between the two goals of intermediary immunity. On the one hand, immunity is essential to incentivising platforms to freely take down the worst forms of content without risk of liability; on the other, the wide discretion Rule 3(1)(b) gives them results in users eventually complaining of horizontal censorship (as has already happened on both sides of the political spectrum).
(Note: in Shreya Singhal the Supreme Court ruled that the contents of Rule 3(1)(b) should be limited to the restrictions contained in Art. 19(2) of the Constitution. This may be both normatively undesirable and impractical, as there are several categories of content falling outside Art. 19(2) that we may want platforms to voluntarily restrict, e.g., copyright- and trademark-infringing content, sexually explicit content not rising to the ‘public morality’ threshold, fraudulent content causing consumer harm, etc.)
Thus, legislation/regulation must balance the protections granted to intermediaries with the risk of horizontal censorship. This is trickier than it sounds, as the underlying interests of platforms (maximising advertising revenue by the collection and sale of personal data) and users (free speech and privacy) can diverge significantly.
The problem magnified manyfold
As I discussed in much greater detail (here), the problem of horizontal censorship is multiplied severalfold because of two crucial factors. First, large online social media platforms are central to public discourse. Second, because there are only a handful of these platforms, simply leaving a platform (or worse, being suspended) can severely curtail a user’s participation in public discourse. As Kate Klonick poignantly notes, ‘we are dependent on these private platforms to exercise our public rights.’
The true extent of how vulnerable citizens are is demonstrated at several layers. First, as moderation is conducted by private entities, it is undemocratic. While platforms may respond to public outcry, this only ensures that the already marginalised, who lack a loud public voice, are disproportionately impacted with little hope of recourse. Next, platforms do not disclose their moderation policies. While ‘Terms of Service’ and ‘Community Guidelines’ are publicly available, the rules and procedures platforms use to take down content constantly change and are rarely disclosed. For example, Facebook ‘accidentally’ blocked the hashtag ‘ResignModi’, later reinstating the content with no explanation of the procedure employed in either the take down or the reinstatement. Finally, the stranglehold online platforms have over public discourse may be leveraged by democratic governments to entrench their power by silencing criticism.
Balancing the scales
Just as gender and caste have historically constituted centres of private power, modern social media represents a new site where citizens are dominated. Regulation must seek to remedy this power imbalance while ensuring that the other interest behind intermediary immunity (incentivising voluntary moderation by not saddling platforms with a publisher’s liability) is not annihilated. The Intermediary Guidelines leave the basic structure of intermediary immunity untouched (except for several additional compliance requirements imposed on social media companies, geared towards expanding investigative and enforcement powers) but attempt to remedy the power imbalance by granting users several reliefs vis-à-vis online intermediaries.
Just as platforms have the discretion both to take down and to not take down speech, internet users may also seek both outcomes. A user may be aggrieved by content and want an intermediary to take it down (the “Take Down Relief”). Alternatively, a user may be aggrieved by the decision of a platform to take down content, and want it reinstated (the “Keep-Up Relief”).
The Take Down Relief
One part of the Take Down Relief involves citizens approaching courts seeking orders (typically in the form of an injunction) against content being displayed online. The Supreme Court in Shreya Singhal ruled that it was impermissible to require online intermediaries to take down content merely because someone complained against it – a judicial order was necessary (¶117). The decision is in line with the twin goals of intermediary immunity stated above. Because platforms were protected from liability until they received a court order, intermediaries could continue to voluntarily remove the worst forms of content from their sites. Crucially, because platforms could not be sued for content on their sites until a court ruled the content was (at least prima facie) unlawful, intermediaries would not take down content en masse, thus avoiding horizontal censorship. Courts would weed out frivolous complaints and direct intermediaries to take down content only after balancing all the interests involved.
Thus, users do have a Take Down Relief against illegal content in the form of courts, and the Intermediary Guidelines 2021 do not interfere with this relief (in fact, they expressly recognise it). However, this is by no means a perfect relief, given the time and costs involved in obtaining a court order. Further, Rule 3(1)(b) is so broad that courts have little guidance (and a lot of discretion) when deciding whether to take down online content. For example, the Delhi High Court noted that the factors to be considered prior to a take down include: (i) the comparative importance of the rights at issue; (ii) the availability of less restrictive measures; (iii) the costs associated with implementing the measures; and (iv) the efficacy of the measures implemented by the ISP. However, another bench of the High Court merely noted that the plaintiffs had demonstrated a prima facie case of defamation and directed the intermediary to block the content. Yet another bench directed search engines to de-index content and permitted a plaintiff to directly approach Internet Service Providers (ISPs) to block additional content uploaded after the court’s original take down order, allowing ISPs to challenge the take down if they believed the plaintiff had exceeded the terms of the original order (which they have no incentive to do).
Rolling back Shreya Singhal
Rule 3(2) of the Intermediary Guidelines 2021 also introduces a new Take Down Relief. A user may lodge a complaint where content “exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation…” The intermediary is obligated to take down such content within 24 hours of receiving a complaint. On the one hand, the Rule empowers users vis-à-vis social media platforms, and a cogent case may be made that where a user complains about explicit/intimate images of themselves uploaded without their consent, and no other interests are involved, the user should not be required to go to court. (Note, they may not even have to go to court if the intermediary agrees and takes down content once flagged, but that approach puts the relief at the sole discretion of the platform.)
On the other hand, requiring intermediaries to take down content at the behest of a private user was an approach expressly rejected by Shreya Singhal to protect against platforms being swamped by complaints and taking down all content that any other user disagreed with. The simple truth is that hard cases exist. For example, the content complained against may depict two people, one who wants the content to stay online and one who wants it taken down. Now imagine the content is also a matter of public interest. These delicate situations where rights need to be balanced are exactly why Shreya Singhal insisted on court oversight to guard against overzealous complainants and over-cautious platforms.
Efficacy probably lies somewhere between Rule 3(2) and Shreya Singhal, and the Guidelines could have: (i) distinguished between cases where interests need to be balanced and cases where they do not; (ii) set up an independent body/fast-track process to hear take down claims; or (iii) narrowed judicial discretion on when content should be taken down. However, the Guidelines ultimately fail to improve the court order system of Shreya Singhal, and instead create a blanket workaround that may lead to content being taken down at the sole behest of a disgruntled user in delicate cases where rights ought to have been balanced by a court.
The Keep-Up Relief
Users may also have a grievance when platforms take down content that users believe should have stayed up (e.g., see the suspension of Sanjay Hegde’s Twitter account over: (i) the use of the photo of August Landmesser refusing to perform the Nazi salute; and (ii) retweeting a poem by a CPI (Marxist-Leninist) politician). As discussed above, while users can review ‘Terms of Service’ and ‘Community Guidelines’, the actual rules and procedures employed to take down content are not transparent.
Prior to the Intermediary Guidelines 2021, a user could pursue a contractual remedy, arguing that the speech did not violate the platform’s ‘Terms of Service’. Such remedies are unlikely to succeed, as contracts are typically skewed in favour of platforms and confer broad discretion to take down content. For example, Facebook’s Terms of Service state that it can “remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.”
Hegde is presently pursuing a public law remedy under Article 226, arguing that Twitter performs a social/public function and should thus be required to respect his free speech (Art. 19) and non-discrimination (Art. 14) rights. Without going into the merits of this argument, such constitutional gymnastics could be avoided by legislation/regulation that directly gives users recourse against social media platforms. Enter the Intermediary Guidelines 2021.
Before we begin, it should be noted that the Intermediary Guidelines 2021 risk substantially aggravating the problem of intermediary take downs, by incentivising/mandating social media platforms to employ automated filtering technologies. The imprecision of such technologies is likely to cause more users than ever to seek out a Keep-Up Relief. (See my previous post on the subject here.)
Rule 4(8) of the Guidelines attempts to provide such a Keep-Up Relief. The Rule states that where a Significant Social Media Intermediary voluntarily disables content, the platform must, prior to the disabling of content, provide the creator/poster of the content a “notification explaining the action being taken and the grounds or reasons for such action”. Next, the platform must also provide the user with a “reasonable opportunity” to dispute the platform’s decision and request the reinstatement of the content. This creates a regime where intermediaries must provide reasoned notice and meaningful review to internet users when content is taken down.
On the face of it, this is a step in the right direction as it forces social media platforms to employ some minimum rationality and allows users a chance to contest take down decisions. But how well will this structure of accountability hold up when confronted with hard cases (which it undoubtedly will be)? First, the obligation merely requires social media platforms to provide an opportunity for users to dispute the decision and does not improve on the standards of content evaluation to be applied by them – recall that Rule 3(1)(b) is broad and does not meaningfully restrict the platform’s discretion in decision making. Judges themselves struggle to impose uniform standards on when content should be taken down. While platforms may seek to ensure a measure of uniformity, unlike judicial decisions, the reasoning by platforms under Rule 4(8) is not required to be publicly available and no system of precedent applies – creating a risk of opaque and haphazard decisions.
The Achilles heel of Rule 4(8)
It is also important to understand that the Intermediary Guidelines 2021 regulate the conditions for intermediary immunity, i.e., the conditions intermediaries must satisfy to not be treated as publishers and sued for hosting unlawful content. Now, even if an intermediary breaches the notice and hearing obligation under Rule 4(8), the consequence is a loss of this legal immunity for hosting unlawful content. But the intermediary is not hosting the content (indeed, the whole dispute exists because the intermediary is refusing to host the content), so the intermediary loses nothing by forfeiting this legal immunity. Simply put, intermediaries cannot be sued for content they are not hosting, so there is no legal risk associated with arbitrary voluntary take downs. This highlights the folly of trying to offer users recourse against moderation decisions through the conditions of intermediary immunity, which is all the Intermediary Guidelines are.
Lastly, Rule 4(8) does not open the door for users to go to a more legitimate body and argue for content to be reinstated. In fact, if courts were to intervene under Rule 4(8), this would undermine the structure of Section 79 and the Intermediary Guidelines, which are premised on treating intermediaries as neutral even though they voluntarily take down content. In the case of Take Down Reliefs, where Person A is aggrieved by content posted by Person B and asks the Court to direct Intermediary X to stop publishing the content, a Court can provide this relief while still upholding the legal fiction of intermediary neutrality (i.e. accepting Intermediary X’s decision not to take down the content, as it is neutral like a newsstand). Section 79, the Guidelines, and Shreya Singhal are premised on not examining the correctness of Intermediary X’s decision to keep the content up.
However, where a platform has taken down content and the user seeks a Keep-Up Relief, Person A would sue Intermediary X for its decision to take down Person A’s post. A court cannot logically grant relief without: (i) acknowledging that Intermediary X is not neutral, in which case it really is more like a newspaper and should be liable for its moderation decisions; (ii) going against the protection for voluntary moderation expressly provided in the Intermediary Guidelines; and/or (iii) engaging in the sort of constitutional and interpretative gymnastics that the Sanjay Hegde case involves. All this leaves the efficacy of the Rule 4(8) Keep-Up Relief at the mercy of large social media companies, and their internal calculus as to whether offering users such recourse will garner them goodwill (see Facebook’s Oversight Board).
The problem the government sought to address with the introduction of the Intermediary Guidelines is a real one. Large social media companies have unbridled power over what users see online at a time when we increasingly live our lives on the internet. This problem cannot be solved unless regulation accepts two significant realities: (i) that intermediary immunity remains central to incentivising intermediaries to moderate content and to protecting users against horizontal censorship; and (ii) that the current moderation procedures employed by platforms lack transparency, accountability, and legal certainty, and disproportionately impact marginalised communities.
By relying solely on the structures of intermediary immunity, the Intermediary Guidelines demonstrate a singular lack of creativity when it comes to regulating online platforms. One alternative approach would be a separate statute setting out certain minimum standards of conduct for intermediaries (no easy task, but at least one that begins with an acceptance of the problems that need to be solved). As a result, the new Guidelines not only fail to provide users with efficacious remedies to either take down or keep up online content, but also perpetuate a status quo where online platforms are free to exercise arbitrary and dominating power over Indian citizens – with the only meaningful regulatory intervention occurring through government blocking of content when platforms fail to toe the government line.
The author is grateful to Praharsh Johorey for his comments on the draft of this post.