Intermediary Guidelines and the Digital Public Sphere: Automated Filtering

In a previous post (here), I argued that focusing on intermediary liability to regulate social media companies would likely weaken free speech and further entrench the power of large technology companies. However, with the adoption of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”), the government has adopted a liability-centred approach – so let us take stock.

As the name suggests, the Intermediary Guidelines extend well beyond the regulation of online intermediaries and raise several issues, as discussed by the IFF (here and here) and the SFLC (here). The Guidelines are currently subject to a comprehensive legal challenge before the High Court of Kerala. At their core, the Guidelines change the obligations imposed on online intermediaries in ways that may fundamentally alter the public sphere. This post focuses on the obligation of social media companies to use ‘automated filtering mechanisms’ to weed out ‘illegal’ content.

The Context

Content on the internet typically concerns three parties: (1) ‘content originators’ who post or share content; (2) online intermediaries who provide a platform for users to share content on; and (3) ‘content recipients’ who view the content posted by content originators. Conceptually, it is important to distinguish between websites that post their own content (e.g., a news organisation’s website) and online intermediaries that merely provide a platform (e.g., WordPress, which allows bloggers and readers to interact but has nothing to do with the content of individual blogs). When (inevitably) the material posted by a content originator violates the law, the question arises whether the online intermediary should be held secondarily liable for its role in spreading the illegal content.

Section 79(1) of the IT Act states that an online intermediary shall not be liable for this facilitative role. This ensures that intermediaries do not have to scrutinise each user post for potential illegality/liability that may be foisted on them, ensuring the free flow of user content on platforms such as Facebook and Twitter. However, given the wide reach of the internet, governments also typically want intermediaries to act as gatekeepers against some of the worst forms of content. As a result, in India, the immunity offered by Section 79(1) requires intermediaries to satisfy certain obligations. Crucially, under Section 79(2)(c), online intermediaries must “observe due diligence”. This is where the Intermediary Guidelines become relevant, as Rules 3 and 4 of the Guidelines prescribe the ‘due diligence’ obligations of online intermediaries. In other words, if intermediaries want to avail of the legal immunity for any illegal content on their platforms, they must satisfy the conditions of Rules 3 and 4 of the Intermediary Guidelines.

Scheme of the Guidelines

Rules 3 and 4 of the Intermediary Guidelines impose a system of differentiated obligations for ordinary online intermediaries and ‘Significant Social Media Intermediaries’ (“SSMIs”). As expected, the latter have significantly more onerous responsibilities to fulfill to avail of legal immunity. While this post focuses solely on SSMIs, it is worth spending a moment on this distinction. The Guidelines define an SSMI as an intermediary which: (i) has more “registered users in India” than a limit set by the Union Government; and (ii) primarily or solely enables online interactions between users and allows them to access, upload, and share content. The Union Government has since stated that the threshold for an SSMI is 5,000,000 “registered users in India”.

As a practical issue, it is unclear exactly how “registered users in India” will be calculated. Geographic locations are easily spoofed on the internet, and the requirement is bound to create a few borderline cases involving some arithmetic gymnastics to avoid being classified as an SSMI. To add to this, platforms such as Signal and Telegram do not release country-specific userbase data.

Taking a step back, it is also worth examining the trend of imposing additional monitoring and filtering responsibilities on large social media companies. The increased burden on SSMIs to moderate their platforms is a tacit admission that these platforms form the bedrock of public discourse or a ‘digital public sphere’. In addition to cat videos, these platforms facilitate communication between citizens and the political centre of self-governance. Thus, these platforms aid democratic legitimation by allowing citizens to participate in and shape public opinion. Suspending a citizen’s social media account is the modern-day equivalent of gagging them at a town hall. Simply put, SSMIs perform a very public function in our democracy, and their regulation must therefore balance the government’s interest in preventing harm in the public sphere (e.g., defamatory or presumptively illegal content) with the free expression rights of internet users (including the right to access/receive information).

Proactive Filtering Requirement

Rule 4 sets out the “due diligence” obligations of SSMIs. Rule 4(4) states that an SSMI:

shall endeavour to deploy technology-based measures, including automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or conduct, whether explicit or implicit, or any information which is exactly identical in content to information that has previously been removed or access to which has been disabled on the computer resource of such intermediary under clause (d) of sub-rule (1) of rule 3, and shall display a notice to any user attempting to access such information stating that such information has been identified by the intermediary under the categories referred to in this sub-rule.

The obligation in Rule 4(4) thus creates a best efforts obligation on SSMIs to utilise automated tools to proactively identify two classes of content. First, content that explicitly or implicitly depicts rape or child sexual abuse, and second, content that is exactly identical to content that has been taken down pursuant to Rule 3(1)(d). Under Rule 3(1)(d), content may be taken down pursuant to a court order or a government order (as noted here, there are virtually no limits on what the government may block). Although the provision does not expressly require an intermediary to block access to a page identified as hosting such content, absent a clear government clarification that users will still be permitted to view the flagged content, we may presume that the intermediary’s notice flagging the content will replace the web-page’s actual content – effectively blocking the content. (Sidebar | As takedowns under court or government orders themselves are haphazard and regularly not disclosed to the public, it is unclear what “categories” the user-facing notice will use, drastically reducing its usefulness as a transparency aid.)
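
The Rules are silent on how an intermediary is to recognise content that is “exactly identical” to previously removed material. Platforms typically implement such checks by fingerprinting removed items and comparing new uploads against that store; the following is a minimal sketch of that approach, assuming a hypothetical SHA-256 blocklist (nothing in the Guidelines mandates this or any other technique):

```python
import hashlib

# Hypothetical store of fingerprints of material previously removed under
# Rule 3(1)(d); the Guidelines do not prescribe any particular mechanism.
removed_hashes: set[str] = set()

def record_takedown(content: bytes) -> None:
    """Fingerprint content removed pursuant to a court or government order."""
    removed_hashes.add(hashlib.sha256(content).hexdigest())

def is_exact_match(upload: bytes) -> bool:
    """True only if an upload is byte-for-byte identical to removed content."""
    return hashlib.sha256(upload).hexdigest() in removed_hashes

record_takedown(b"offending post")
print(is_exact_match(b"offending post"))   # True
print(is_exact_match(b"offending post!"))  # False: one extra byte defeats the check
```

The literalness of an exact-match check cuts both ways: a trivially edited copy slips through, which is precisely what pushes platforms towards fuzzier matching and, with it, a higher rate of erroneous takedowns.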

Overcompliance and Monitoring

Rule 4(4) presents intermediaries with a choice: they must either monitor their platforms for select types of illegal content and replace it with a red-flag notice, or forgo the legal immunity offered by Section 79(1) and risk being held secondarily liable for facilitating the spread of illegal content. Despite the musings of certain commentators, the impact of giving intermediaries such a choice should be obvious. As Jack Balkin notes, this approach ‘raises problems of collateral censorship and digital prior restraint.’ In other words, conditioning legal immunity on the effectiveness of a platform’s monitoring and takedown of illegal content is bound to make platforms overly cautious. They are likely to take down content that could potentially be illegal but may not necessarily be found so if adjudicated upon. As Balkin notes, this phenomenon is inherent to systems of intermediary liability, as intermediaries have no incentive to protect the speech of unknown third-party users over their own coffers.

The result is that intermediaries may restrict a user’s post purely to retain the legal immunity provided by Section 79(1) – because they do not want to run the risk that, if the post is later found to be illegal and they had failed to take it down, they could be held liable for disseminating it. Crucially, an intermediary may take down or block speech that a user could tenably claim is constitutionally protected, but the user’s claim does not sufficiently alter the risk calculus of the intermediary, which operates under the outsized threat of liability. This problem is aggravated by the oddity that users do not have free speech rights vis-à-vis online platforms (private companies), yet these platforms undeniably constitute part of the digital public sphere.

To avoid this very problem, the European Union expressly prohibits imposing a ‘general monitoring obligation’ on intermediaries to weed out illegal activity on their platforms (see Art. 15, E-Commerce Directive). In the context of requiring intermediaries to weed out copyright-infringing material from their platforms, the European Court of Justice ruled as far back as 2012 that a general monitoring obligation was incompatible with the free speech rights of users, as it may lead to legal content being caught in the dragnet of a web-filter. The court held that a general monitoring obligation exists where intermediaries must install filters that: (i) are targeted at user content; (ii) apply indiscriminately to all users; (iii) are installed as a preventive measure; (iv) operate at the sole expense of the intermediary; and (v) run for an unlimited period of time. Needless to say, Rule 4(4) ticks all the boxes of an unlawful general monitoring obligation.

Using Automated Filters

The use of automated systems is not new. Most large social media platforms already voluntarily use a host of automated tools to filter content. However, these are by no means perfect. From Facebook’s takedown of iconic war photography to Twitter’s suspension of an Indian lawyer’s account over the use of August Landmesser’s photo defying the Nazi salute, the mistakes of automated systems are well documented. Put simply, at their present state of development, automated systems are poor at evaluating the cultural context, irony, or critical analysis necessary to distinguish between illegal and legal content. Now consider the types of content Rule 4(4) requires such automated tools to block: ‘explicit or implicit depictions of rape and child sexual abuse’. Automated systems are unlikely to accurately navigate the layers of nuance associated with such content, which may range from actual footage of child sexual abuse to critical commentary on sexual violence.

This issue is exponentially aggravated when the second target of automated filters is considered. Content may be taken down under Rule 3(1)(d) for a host of reasons, some legal and some legal but unconstitutional. A court may pass an order to take down defamatory content under Rule 3(1)(d). The Government may also pass an overbroad order blocking large swathes of content under Rule 3(1)(d). The effect is to create an ever-growing database of illegal content (infinitely more complex than even depictions of rape or child sexual abuse) that automated systems must continually weed out. Furthermore, under Rule 3(1)(d) specific URLs are restricted. If an automated system is unable to discern exactly what content on that web-page was illegal, there is a significant chance of collateral censorship, as all content on the originally blocked page may be flagged as illegal and automatically blocked going forward. Finally, there are no safeguards to ensure that, once content is no longer restricted under Rule 3(1)(d) (if a court modifies or the government withdraws an order), the corresponding automated filters are promptly switched off.
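
To make the collateral-censorship risk concrete, here is a hedged sketch (all names and sample data are invented for illustration) of a filter that, unable to isolate the offending item on a blocked URL, simply fingerprints everything found on that page:

```python
import hashlib

def fingerprint(item: str) -> str:
    return hashlib.sha256(item.encode()).hexdigest()

# A page blocked under a hypothetical Rule 3(1)(d) order mixes one targeted
# item with perfectly lawful material hosted at the same URL.
blocked_page_items = [
    "allegedly defamatory paragraph",    # what the order was aimed at
    "news report on the same dispute",   # lawful context on the same page
    "reader comment quoting the court",  # also lawful
]

# A naive filter cannot tell which item attracted the order, so it
# fingerprints the whole page...
blocklist = {fingerprint(item) for item in blocked_page_items}

# ...and later suppresses the lawful items wherever they resurface.
reposted = "news report on the same dispute"
print(fingerprint(reposted) in blocklist)  # True: lawful speech auto-blocked
```

Under that design, the lawful report is suppressed wherever it reappears, even though no order ever targeted it.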

Although Rule 4(4) uses the term “proactively”, it is unclear whether SSMIs will be obligated to restrict content even prior to publication. A filtering system that restricts content prior to publication fundamentally privileges the interest in restraining speech. In other words, it treats all speech identified by the filter as presumptively illegal, contrary to settled free speech doctrine. Lastly, even if the content is published, a user must be given the right to contest the application of the filter before the content is taken down. While Rule 4(8) of the Intermediary Guidelines does postulate an ex-ante dispute resolution mechanism, it does not apply to content taken down pursuant to the automatic filtering under Rule 4(4). This problem is amplified when the relevance of the content is time-sensitive and no content recipient has been identifiably injured. Famously, in Shreya Singhal, the Supreme Court struck down a requirement that intermediaries take down content at the behest of a private complaint absent judicial adjudication. Where this leaves a requirement that intermediaries take down content even prior to a private complaint remains to be seen.

Privacy

The use of automated filters also raises issues surrounding user privacy. Imposing a general monitoring obligation on intermediaries to constantly weed out “illegal” content requires them to conduct an ongoing analysis of all user content posted on their platforms. This exercise is likely to produce a large dataset that links user profiles to any “illegal” content posted by them. Consider a user profile that repeatedly posted content caught by a ‘depictions of rape’ filter or a ‘farmer protests’ filter – the intermediary would be in possession of this information, which may: (i) be requisitioned by the government; or (ii) be hacked or leaked and made available to the public devoid of any context.
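
To illustrate the kind of record such monitoring necessarily generates, here is a purely hypothetical schema; the Rules prescribe none, and no actual platform’s data model is being described:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FlagEvent:
    """One hypothetical entry in a filtering pipeline's audit trail."""
    user_id: str
    filter_label: str      # e.g. "depictions of rape" or "previously removed content"
    content_hash: str
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Every proactive match adds a row linking a user to "illegal" content,
# accumulating a dataset that can be requisitioned, leaked, or read out of context.
audit_log: list[FlagEvent] = []
audit_log.append(FlagEvent("user-123", "previously removed content", "9f2a77ab"))
print(len(audit_log))
```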

Parchment Barriers

The Union Government is not unaware of these harms, and Rule 4(4) includes three safeguards aimed at preventing these issues. First, the measures taken by intermediaries must be “proportionate” and must consider the ‘interests of free speech and privacy of users.’ Second, intermediaries shall implement “mechanisms of human oversight … including a periodic review of any automated tools deployed”. Lastly, the intermediary “shall evaluate the automated tools having regard to the accuracy and fairness of such tools, the propensity for bias and discrimination in such tools, and the impact on privacy”.

On paper, these are excellent considerations to weigh when deciding whether to use automated filtering. However, as regulatory text, they lack the specificity and enforceability necessary to rein in the dangers of automated systems. None of the three safeguards provides any indication of when an intermediary is in breach of them. How much oversight and review are necessary to maintain immunity? What level of efficacy must the automated systems achieve, and what is a tolerable error rate? What forms of data collection for the purposes of such systems are impermissible? Additionally, the Guidelines and the IT Act fail to provide any form of regulatory oversight. There is no dedicated body to police these incredibly complicated systems being unleashed on Indian users, meaning that, in the real world, compliance will be almost impossible to ensure on an ongoing basis.

Automated systems are not unregulatable. There exist several real-world proposals for monitoring such systems, such as vetting the source code, black boxes, and, perhaps most promisingly, ‘tinkering’. This involves systematically testing how such filters respond to sample content on an ongoing basis (see the sketch below). However, such solutions require a regulatory framework and committed agencies. The safeguards set out in Rule 4(4) are more akin to parchment barriers – laudable on paper but easily circumvented by both intermediaries and the government in practice.
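
As a rough illustration of what ‘tinkering’ could look like in practice, the sketch below probes a filter with labelled sample content and reports its error rates; the filter interface and samples are invented for this example and merely stand in for whatever opaque system an SSMI actually deploys:

```python
from typing import Callable, Iterable, Tuple

def audit_filter(filter_blocks: Callable[[str], bool],
                 samples: Iterable[Tuple[str, bool]]) -> dict:
    """Probe a deployed filter with (content, should_be_blocked) pairs."""
    false_positives = false_negatives = total = 0
    for content, should_block in samples:
        total += 1
        blocked = filter_blocks(content)
        if blocked and not should_block:
            false_positives += 1      # lawful speech suppressed
        elif not blocked and should_block:
            false_negatives += 1      # unlawful content missed
    return {
        "false_positive_rate": false_positives / total,
        "false_negative_rate": false_negatives / total,
    }

def crude_keyword_filter(text: str) -> bool:
    # Stands in for the intermediary's actual (opaque) filtering system.
    return "assault" in text

print(audit_filter(crude_keyword_filter, [
    ("documentary commentary on sexual assault", False),  # lawful, yet blocked
    ("abusive material describing an assault", True),     # unlawful, caught
]))
# -> {'false_positive_rate': 0.5, 'false_negative_rate': 0.0}
```

A regulator running such probes continuously would at least have a measurable answer to the error-rate questions posed above – which is exactly what the Guidelines, as drafted, lack.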

The so-called safeguards also raise the spectre of a more sinister threat that is now well-established in India’s legal system: ‘discriminatory legalism’. Simply put, the legal uncertainty and the lack of a dedicated, independent agency to oversee SSMI filtering open the door for the government to selectively prosecute SSMIs for either violating the filtering obligation or violating the safeguards. For example, if an SSMI does not take down content unfavourable to the ruling party, a violation of the web-filtering obligation may be alleged; and if the SSMI takes down content favourable to the ruling party, a violation of the safeguards may be alleged.

Conclusion

This is the first post of a series examining the impact of the new Intermediary Guidelines on the digital public sphere in India. Requiring SSMIs to take an increased role in content moderation of public discourse is not an unreasonable demand. The deregulatory atmosphere in which online intermediaries emerged means that private companies now perform the core democratic function of facilitating public discourse and regulation must recognise this. The large volume of content on these platforms also means that the use of automated filters is inevitable.

However, this set of posts critically examines whether the form of regulation adopted by the Intermediary Guidelines is the best way of ensuring a vibrant digital public sphere. Making intermediaries’ legal immunity conditional on ceaselessly monitoring content fundamentally alters their risk calculus in favour of content takedown. The lack of a well-conceived regulatory framework to address the risks of automated filtering by social media companies leaves Indian citizens more at the mercy of large tech companies and government interference than ever. A diametrically opposite, but equally feasible, approach suggested by Martin Husovec is to fine intermediaries for over-zealously taking down content. This incentivises intermediaries to invest in improving their content removal systems, as opposed to India’s approach which, at best, allows the social cost of lost speech to go unaddressed – and at worst, incentivises intermediaries to take down more content than ever.
