In June 2022 the Ministry of Electronics & Information Technology unveiled proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”). The amendments propose the creation of one or more Grievance Appellate Committees (“GAC”) to hear appeals against platform decisions to remove or retain disputed content. Users can approach the GAC in two situations: (i) if the user complains against content and the platform fails to remove it; and (ii) if the platform voluntarily takes down a user’s content or suspends a user’s account, and the user wishes to reinstate their content or account. The GAC can direct the platform to either remove or reinstate content respectively.
The amendments raise several questions, beginning with whether the creation of the GAC is within the rule-making powers bestowed on the Government under the IT Act. However, this post analyses the power of the GAC to reinstate content that a platform has voluntarily taken down pursuant to its own Terms of Service (“ToS”) (or ‘Community Guidelines or Standards’). Imagine a situation where a platform removes a politician’s post for violating the platform’s ToS on misinformation or hate speech, but the GAC directs the platform to reinstate such speech. In such a situation, the GAC decision would compel the platform to host speech that violates its own ToS. This post argues that under current Article 19(1)(a) doctrine, compelling platforms to host speech contrary to their ToS would violate the platform’s own free speech rights unless the reinstatement furthers the informed decision-making necessary for self-governance. Thus, the GAC’s role should be limited to analysing whether the platform’s decision to remove the content was consistent with the platform’s own ToS.
At the outset, it is important to recognise a few key points. First, a platform’s ToS forms a contractual relationship between user and platform. In other words, a platform’s ToS sets out the kinds of content the platform will allow a user to post on its site – these are often branded as Community Guidelines or Standards. Second, ToS’ are often much broader than actual laws, and platforms may take down legal speech because they believe that such speech would lower the quality of a user’s experience (often referred to as ‘Lawful but Awful’ content). Thus, when enforcing their ToS’, platforms may remove content that constitutes misinformation, spam, or nudity even if it doesn’t violate any laws. Third, a user aggrieved by a platform’s decision to remove their content could go to court and make a contractual claim that the user’s content did not actually violate the platform’s ToS. But ToS’ typically grant platforms broad discretion in the kinds of content they can remove (and users have agreed to this contract); thus it is practically impossible to sue for reinstatement of content. This may also explain the government’s motivation in introducing the GAC: to provide users with a method of getting content reinstated even after it is taken down by a platform.
Compelled speech and free speech
Free speech doctrine has long protected against government interference in the editorial discretion of organisations (e.g., newspapers) to decide what content to publish and what content not to publish. This protects the autonomy interest of the organisation by allowing it to control its message – ensuring the government cannot compel an organisation to publish content inconsistent with its beliefs. It also protects democratic self-governance by ensuring the government does not distort public discourse by compelling various organisations to carry biased pro-government content. However, as with all such freedoms, the right against compelled speech is not absolute.
In Union v Motion Picture Association, the Supreme Court analysed the validity of various statutes which compelled private cinema operators to screen scientific and educational movies. Prima facie this interfered with the editorial discretion of cinema operators to decide what movies to screen. The Court upheld the statutes, but laid down the following test for when it was permissible for the Government to compel organisations to carry speech:
Whether compelled speech will or will not amount to a violation of the freedom of speech and expression, will depend on the nature of a “must carry” provision. If a “must carry” provision furthers informed decision-making which is the essence of the right to free speech and expression, it will not amount to any violation of the fundamental right of speech and expression. If, however, such a provision compels a person to carry out propaganda or project a partisan or distorted point of view, contrary to his wish, it may amount to a restraint on his freedom of speech and expression.
The test laid down by the Court is clear. Where compelled speech furthers the democratic self-governance interest behind Article 19(1)(a), compelled speech may be a permitted interference on editorial discretion. Crucially, the government could not compel cinemas to screen propaganda or biased or false information. The Court also examined the reasonableness of the restriction, specifically noting that the statutes clearly prescribed the types of movies to be screened (scientific and educational) and the ratio of compelled movies to freely chosen movies (1 to 5). Thus, the restriction was narrowly tailored towards achieving the government’s legitimate aim of furthering informed decision-making and did not grant the government unrestricted power to prescribe content.
(As an aside, this is also broadly the approach adopted by the United States Supreme Court and interested readers may refer to Miami Herald Publishing Company v Tornillo and Turner Broadcasting v FCC. The former invalidated a requirement that newspapers publish replies to editorials that the newspaper disagreed with, and the latter upheld a content-neutral “must carry” provision on cable operators on the grounds that ensuring more viewers had access to a minimum number of broadcast stations furthered an informed public.)
Interfering with social media’s editorial discretion
Social media platforms exercise editorial discretion when they remove content pursuant to their ToS. Different social media companies have their own ToS’ and they are free to curate speech on their platform based on what they believe will best serve their users. For example, when former President Trump posted misinformation about election ballots, some social media platforms chose to add fact-checking labels and even take down content, while others carried Trump’s posts as is. In other words, platforms decide what content appears on their sites and what content does not, and these decisions reflect the platform’s view on what speech has value to its users – like a newspaper exercising editorial discretion on what content to print.
When platforms have been accused of favouring certain categories of content, they have been quick to take the defence that they are ‘neutral’ and do not interfere with content. Even in court, they often argue that they are neutral intermediaries who cannot be asked to interfere with content or determine its legality. While politically expedient, this claim is also influenced by how Indian law regulates platforms. Section 79(2) of the Information Technology Act, 2000 states that platforms must not interfere with content if they wish to retain legal immunity for hosting unlawful content (colloquially referred to as safe harbour). The principle of safe harbour recognises that while platforms may exercise editorial discretion, the nature of this discretion is different given the high volumes of user uploaded content they host – and platforms need to be protected from liability for any unlawful content that one of their millions of users may upload. Without safe harbour, platforms could be sued for any unlawful content they may be hosting and thus platforms aim to comply with Section 79(2).
This may suggest that platforms are indeed neutral and do not interfere with content (or exercise editorial discretion). However, Rule 3(1)(d) of the Intermediary Guidelines 2021 expressly notes that where platforms remove content voluntarily, they are not in violation of Section 79(2). In other words, Indian law expressly recognises that platforms remove content voluntarily based on their ToS – and thus exercise what amounts to editorial discretion in determining what content stays up and what content is taken down.
The GAC’s interference with editorial discretion
In the event a platform voluntarily removes content pursuant to its ToS, under the proposed amendments, a user could approach the GAC to have it reinstated. An order by the GAC to reinstate such content contrary to the platform’s ToS would be compelled speech and a direct interference in the platform’s freedom to decide what content to host and what content not to host. Under the principle set out in Union v Motion Picture Association, such an interference would be impermissible unless the content being reinstated furthered a democratic self-governance interest. The Court’s test would not permit compelling a platform to host propaganda or biased or false information.
It is also important to analyse the scope of the power granted to the government. The proposed amendments do not specify the types of content the GAC may direct to be reinstated (unlike the ‘scientific and educational’ films) nor do they specify the volume of content the GAC can direct to be reinstated. Thus, the current framing of the GAC confers unrestricted powers of interference with a platform’s freedom of speech and may not form a ‘reasonable’ restriction on free speech. This concern is exacerbated by the fact that: (i) the Union Government is responsible for appointing members to the GAC but the Union Government’s instrumentalities or affiliates may also be parties before the GAC; and (ii) the independence and impartiality of the GAC Chairperson or Members are not guaranteed through traditional safeguards such as a transparent selection procedure, minimum qualifications, and security of tenure and salary.
A few important caveats must be made about the above argument. First, the proposed amendments are yet to be formally adopted and may undergo changes. Second, most major social media platforms are foreign actors, and the extent to which Article 19(1)(a) rights may be invoked by them remains contested. For example, the Union Government recently argued that Facebook and WhatsApp could not challenge Indian laws that allegedly violated their users’ privacy because they were foreign companies. Third, platforms may strategically choose to avoid expressly invoking their free speech rights to edit and curate content as it shines a light on how they structure online speech.
Lastly, the Union Government is trying to solve a real problem. Online platforms have often acted arbitrarily in taking down content (see here and here) and providing users some redress against unreasoned takedowns may be well-intentioned. However, any such mechanism must also respect the free speech rights of platforms to not carry speech that they determine violates their ToS. The newly adopted Digital Services Act in Europe, for example, allows users to appeal to a dispute settlement body on the ground that the platform violated its own ToS when taking down the user’s content – i.e., the platform acted arbitrarily. If the GAC’s role were limited to ensuring that platforms enforce their own ToS’ fairly, by examining whether content removals are in accordance with the relevant platform’s ToS (rather than determining whether the content is legal or illegal), the GAC may provide users with recourse against platforms that otherwise exercise extraordinary powers over online speech. Nonetheless, the GAC would still have to possess sufficient safeguards to ensure independence, impartiality, and fair procedures. In its current iteration, it risks being a tool for the State to compel platforms to host speech they may have otherwise taken down.