Social Media & Intermediary Liability: Missing the Forest for the Trees?

Recent events have once again brought into focus the question of imposing legal liability on online intermediaries, particularly social media companies. In the United States, Twitter’s decision to ‘flag’ as potentially misleading President Trump’s tweet disparaging vote-by-mail procedures prompted the President to issue an executive order reconsidering the qualified immunity granted to intermediaries (here). In India, Twitter voluntarily and independently ‘disabled’ two tweets by advocate Prashant Bhushan upon the initiation of contempt proceedings against the lawyer (here). All this while India is in the process of amending its rules under the Information Technology Act (“IT Act”) regulating online intermediaries (the “Intermediary Guidelines”).

The need to shield online intermediaries from liability to protect freedom of expression on the internet is well established. India’s new regulation seeking to make intermediaries monitor and take down content is a step back in this respect. But the proposed guidelines aside, in this post I argue that a regulatory focus on intermediary liability by the government ignores several larger structural issues with speech on the internet (especially on social media websites) and potentially hampers more robust and multi-faceted regulatory approaches. I begin by briefly setting out India’s intermediary regime (both existing and proposed) and the need to shield intermediaries from liability. I then attempt to sketch out the role of large social media companies in structuring speech on the internet and how an undue focus on intermediary liability further empowers already powerful actors at the cost of internet users and free speech. Without going so far as ‘breaking up big tech’, I explore possible regulatory measures that can counteract the power of social media companies over users’ speech.

Intermediary Immunity Grossly Simplified

Given the decentralised nature of the internet, online intermediaries have long been targets of legal liability for allegedly unlawful online speech. Traditionally, a “tort law principle of secondary liability for third party action” is applied against intermediaries. Simply put, a website may be sued for hosting unlawful content even though the website itself did not create or post the content (secondary liability), the unlawful content actually having been created and posted by an anonymous web-user (third party action or content).

Governments, however, quickly recognised that exposing intermediaries to this liability may make them extremely cautious and cause them to start deleting all third-party content that carries even a slight risk of legal liability. Not ideal for online business or free speech. Therefore, governments provided “immunity” or “safe harbour” to intermediaries except in narrowly defined situations. For example, Section 79 of the IT Act provides online intermediaries legal immunity for unlawful third party content if: (i) the content is merely temporarily stored or transmitted on the site; or (ii) the intermediary takes down the content upon receiving “actual knowledge” of the unlawful content or upon being notified by the Government; or (iii) the intermediary complies with the Intermediary Guidelines more generally.

In an exceedingly rare moment of clarity, the Indian Supreme Court in Shreya Singhal held that online intermediaries could not be tasked with determining whether content was legal, and that “actual knowledge” meant a court order directing the take-down of unlawful content. In other words, intermediaries would only ever be subject to legal liability if a court of law directed them to take down content and they still refused to do so. (Note: this makes Twitter’s “disabling” of Prashant Bhushan’s tweets an entirely voluntary act as there existed no court order directing the take-down. What it says about Justice Arun Mishra asking Twitter why it had not taken down the tweet is best left to the reader’s imagination.)

Proposed Amendments

As the intermediary’s “safe harbour” or shield against liability for hosting third party content is dependent on compliance with the Intermediary Guidelines, the content of these guidelines is incredibly important. As the Software Freedom Law Centre has reported (here), India’s new Intermediary Guidelines make continued immunity contingent on several problematic conditions, namely: (i) mandatory upload filters; (ii) traceability; (iii) a local incorporation requirement; and (iv) a twenty-four hour take-down requirement. These requirements are undeniably problematic, cumulatively restricting and chilling speech. For example, an upload filter would force intermediaries themselves to judge the legality of content before it is published (flying directly in the face of the reasoning in Shreya Singhal). Even worse, upload filters shift the burden onto the speaker to justify why their speech is not unlawful, rather than requiring a person offended by the speech or the government to justify why the speech should be taken down. This effectively makes restricting speech the norm and free speech an exception to that norm.

The proposed amendments to the Intermediary Guidelines warrant raising alarm bells, and interested readers should go through SFLC’s report. However, this post seeks to explain why the government’s focus on intermediary liability is itself misguided.

The Bigger Picture

The renewed political impetus to regulate intermediaries is a result of the new dual role of large internet companies, particularly social media companies. As Jack Balkin notes, large social media companies not only make speech available for our consumption but also curate the speech that we consume. For example, not only does Twitter allow a user to see the speech of millions of other users, but by selectively ranking, editing, and removing content Twitter also determines what speech a user sees and does not see. This second role of curation cannot be performed without the intermediary (e.g. Twitter) having its own substantive view on what speech is good speech and what speech is bad speech.

Social media companies often argue that they are content neutral, or that speech is tailored based on a user’s own interests. However, this is simply not borne out in practice. For example, when President Trump stated that vote-by-mail ballots were unsafe, Twitter ‘flagged’ the statement as potentially misleading, but Facebook carried it as is. Simply put, Twitter’s substantive view on speech in the context of elections was different from Facebook’s. Therefore, despite being granted immunity, large intermediaries in reality voluntarily perform an editorial (or curating) function that determines what speech should be on their platform and what speech should not. These substantive views are often codified as a platform’s “community guidelines”.

This voluntary curating function coupled with the massive market share of existing social media companies raises a significant issue. With the internet presently structured around just two or three social media companies, the probability that an individual citizen’s substantive view on good and bad speech will diverge from the social media company’s view on speech is extremely high. The most obvious manifestation is when a website takes down what a user may see as legal content, or alternatively refuses to take down what a user may see as illegal content. To be clear, it is not desirable to have content taken down merely because it is objectionable to another internet user (this is why Shreya Singhal imposed the court order requirement). However, when the user’s dissatisfaction with the social media site’s view of good and bad speech is examined in light of the user’s inability to choose another social media site or participate in the framing of the “community guidelines”, the curating role of social media companies absent any legal regulation becomes problematic.

Another way to look at this issue is that large social media companies have effectively created bottlenecks for speech on the internet, of which they are the sole unregulated gatekeepers. Gatekeeper functions are performed by most publishers; for example, a magazine may refuse to publish an author’s article because of the magazine’s political views. However, the essential role played by social media companies in facilitating speech on the internet and the tiny number of companies involved create a huge asymmetry of power between internet users and social media companies, where an internet user cannot migrate to another social media platform in the same way an author can find another magazine to publish in. If a user wishes to participate in speech on the internet, they must subject themselves to the social media company’s views on speech in the form of arbitrarily enforced community guidelines. For example, the German Federal Court recently ruled that Facebook users were faced with a “false choice” between handing over huge amounts of private data to Facebook or not using the company’s ubiquitous social media service (here). In other words, internet users cannot ‘choose not to use Facebook’ because of its centrality to speech on the internet. The same dependence is also true of downstream companies and people who rely on social media companies for certain services (e.g. app developers for Apple’s App Store or YouTube’s content creators). This imbalance of power and the often arbitrary actions of intermediaries themselves have created the impetus for governments to step in and seek to impose new rules that would make the voluntary editorial function carried out by intermediaries more acceptable to the citizen’s (or government’s) substantive view on speech.

Lastly, a user’s legal recourse against intermediaries is extremely limited. For example, in 2019 Twitter disabled senior lawyer Sanjay Hegde’s Twitter account over: (i) the use of a photo of August Landmesser refusing to perform the Nazi salute; and (ii) retweeting a poem by a CPI (Marxist-Leninist) politician – incidentally, the original tweet was not taken down by Twitter. Hegde took Twitter to court alleging a violation of his free speech rights and a breach of Twitter’s own community guidelines. Twitter argued that as a private entity it was not obligated to guarantee Article 19(1)(a) rights. While there may exist a case for a contractual breach of the community guidelines, the episode highlights how even where internet users have the means and know-how to challenge an intermediary’s voluntary curating function, the law is ill-suited to ensure recourse.

Meaningful Regulation

Recall that intermediaries have always been soft targets for regulating speech online because they represent entities that the law can identify, regulate, and penalise in the otherwise decentralised world of the internet. India’s proposed new Intermediary Guidelines seek to make intermediaries even easier to identify and regulate (a local incorporation requirement) and open intermediaries up to legal liability if their view of speech does not comport with the government-imposed norm (upload filters). The problem with this approach from a free speech perspective is that using legal liability as a threat to force intermediaries to take greater responsibility for online expression will likely lead to the systematic over-removal of legitimate speech. For example, Twitter did not wait for a court order to remove Prashant Bhushan’s tweets, as it was legally entitled to do under the Shreya Singhal ruling. Irrespective of whether an intermediary’s community guidelines are lax or strict, the spectre of legal liability forces intermediaries to be extremely cautious and remove speech that may not be unlawful. Worse, the high cost of upload filters and local incorporation requirements automatically privileges large intermediaries such as Facebook and Google over smaller companies. Therefore, a regulatory approach focussed on intermediary liability not only fails to address the power imbalance between online intermediaries and their users, it further empowers existing intermediaries and incentivises them to be more aggressive in their voluntary curating function.

Understanding the problem created by user dependence on social media companies to speak on the internet, but also recognising that weakening the “safe harbour” of intermediary immunity may not be a cogent response, government regulation must be more creative. “Breaking up big tech” has become an increasingly common demand amongst certain politicians. Without going into the merits of a government-mandated break-up of companies such as Facebook, Google, and Amazon, less drastic steps may be possible. It is also important to recognise that the harms created by large online intermediaries are not identical. For example, Facebook and Twitter may act as bottlenecks for free speech on the internet. Amazon has been accused of using its dual role as a producer and a sales platform to discriminate against sales partners. Apple has been accused of discriminating against app developers before their apps can be listed on the App Store (the only way developers can supply their apps to users). Charges have been levelled against Google for rigging its page-rank system to ensure that competitor services do not appear in Google’s ubiquitous search results. These diverse harms will likely require individuated solutions beyond a blanket breakup of large internet companies (previous breakups of large telecommunications and steel companies have resulted in re-consolidation within a decade or two).

A regulatory response must first be able to identify where speech may be being stifled. Recognising that users are unable to migrate to alternative social media networks even when an intermediary takes down their speech without a court order, a European Digital Rights (“EDRi”) position paper explicitly recommends “bottleneck power” (the ability to preserve and lock in a user base) as a competition law metric by which online platforms should be judged (here). This can help regulators understand when users are locked into online speech eco-systems, resulting in online intermediaries having too much power.

To break down this power, both EDRi and Balkin advocate “interoperability” as a vital step that can restore significant power to internet users. A simple form of interoperability would allow users to access social media platforms from a variety of alternate services. For example, a user could access Twitter from a third-party app (not the Twitter app). This third-party app could display tweets purely chronologically, or rank them using a different algorithm from Twitter’s own, allowing the user to escape Twitter’s speech-curating function to a limited extent (Twitter’s ranking of tweets) and choose whichever third-party app the user believes to be most beneficial.
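To make the idea concrete, here is a minimal sketch in Python of such a third-party client. Everything platform-specific in it (the endpoint, the access token, and the response field names) is a hypothetical placeholder rather than Twitter’s real API; the point is simply that a client under the user’s control can discard the platform’s ranking and display the same posts chronologically.

```python
# A minimal, illustrative third-party client. The endpoint URL, access token,
# and response field names are hypothetical placeholders (not Twitter's actual
# API); the sketch only shows that a client under the user's control can
# discard the platform's ranking and re-order the same posts chronologically.
import requests
from datetime import datetime

API_URL = "https://api.example-platform.com/v1/home_timeline"  # hypothetical endpoint
ACCESS_TOKEN = "user-supplied-oauth-token"                     # hypothetical credential

def fetch_timeline():
    """Fetch the raw, platform-ranked feed for the authenticated user."""
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.json()["posts"]  # hypothetical response field

def chronological(posts):
    """Ignore the platform's ranking and sort posts newest-first by timestamp."""
    return sorted(
        posts,
        key=lambda p: datetime.fromisoformat(p["created_at"]),  # hypothetical field
        reverse=True,
    )

if __name__ == "__main__":
    for post in chronological(fetch_timeline()):
        print(post["created_at"], post["author"], post["text"])
```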

A more robust form of interoperability would insist on a set of common internet protocols that allow users to communicate directly between different internet platforms (e.g. a Facebook user could directly message a Twitter user). This may sound unthinkable at present, but such common standards exist for email. An internet user is free to choose between a variety of email services but is assured that they can still mail users on other email services. As EDRi notes, if I migrate from Yahoo to Gmail, I do not automatically lose all my friends, followers, or contacts; the threshold to migrate is therefore low, and user dependence and lock-in are mitigated. By allowing users to migrate between different social media companies easily, social media companies are incentivised to provide better services, and users are free to choose the social media company that best reflects their substantive view of speech rather than being beholden to any one service’s “community guidelines”. For example, if I found my speech constantly falling foul of Facebook’s “community guidelines”, I could migrate to social media platform X but still be able to reach my erstwhile “friends”. This would also apply in reverse: if I felt that Facebook was not censoring content enough and I wanted an even more curated feed, I could migrate to social media platform Y with stricter “community guidelines”. In the long term, this would ensure more social media companies and continued interoperability (today, would you leave your email service for a new service that does not allow you to send emails to users with Gmail or Yahoo or Hotmail accounts?).
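A minimal sketch of what such a common protocol could look like is set out below, assuming entirely hypothetical handles, field names, and endpoints (loosely inspired by existing federation protocols such as ActivityPub). The sender’s platform wraps the message in a shared format and delivers it to the recipient’s home server, much as email servers relay messages between domains.

```python
# A minimal sketch of cross-platform message delivery over a shared format.
# Handles, field names, and the "/inbox" endpoint are illustrative assumptions
# (loosely inspired by federation protocols such as ActivityPub), not any
# platform's actual API.
import requests

def deliver(sender: str, recipient: str, text: str) -> None:
    """Wrap a message in a common format and POST it to the recipient's
    home server, resolved from the domain part of their handle, much as an
    email server is resolved from the part after the '@'."""
    message = {
        "type": "Note",        # common object type shared by all platforms
        "from": sender,        # e.g. "alice@facebook.example"
        "to": recipient,       # e.g. "bob@twitter.example"
        "content": text,
    }
    inbox = f"https://{recipient.split('@')[1]}/inbox"  # hypothetical endpoint
    resp = requests.post(inbox, json=message)
    resp.raise_for_status()

# Usage: a user on one (hypothetical) platform messages a user on another.
deliver("alice@facebook.example", "bob@twitter.example", "Hello across platforms!")
```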

It is important to note that internet companies have systematically resisted moves towards such forms of interoperability. For example, Twitter limits the number of users a third-party Twitter app can host. None of Twitter, Facebook, or YouTube provides meaningful application programming interfaces (APIs) that would allow for a service that collates your Facebook, Twitter, and YouTube feeds. Apple openly uses a “walled garden” approach to push sales of additional Apple-only compatible hardware.

Lastly, governments should look to set up specialised tribunals or regulators that improve recourse for internet users against the actions of intermediaries. Rather than requiring a user to approach regular courts to allege a contractual breach of community guidelines by the intermediary, specialised tribunals offering quick and meaningful dispute resolution would also incentivise better intermediary behaviour. The online nature of these disputes is also an opportunity to examine online-only dispute settlement mechanisms such as virtual tribunals or Lok Adalats.

Conclusion

This post stemmed from two excellent articles written by Jack Balkin (here) and Lina Khan and David Pozen (here). Balkin’s initial approach was to suggest imposing fiduciary obligations on intermediaries to ensure intermediaries do not act arbitrarily or like “con-men” with respect to user data. As Khan and Pozen note, an approach that centres on the regulation of intermediaries ignores the larger realities of the internet eco-system within which intermediaries operate today. Large internet companies already owe fiduciary obligations to stockholders to maximise value, which is often achieved through a business model reliant on the spread of divisive, inflammatory content and the erosion of user privacy. For example, the New York Times reported on an individual spreading political disinformation purely to capitalise on Google ad-revenue (here). When we recognise that these social media companies also form the cornerstone of modern public discourse, the magnitude of the problem is put into perspective. As Khan and Pozen conclude, the business model matters, as do economic realities.

A regulatory approach that focuses entirely on whether intermediaries should be held liable for third party content is unlikely to address the harms stemming from the extreme user dependence on large social media sites. Given the key role social media companies play in curating speech on the internet and the outsized market share these companies possess, there is bound to be a mismatch between a user’s substantive view of speech and the views available on the internet, resulting in the stifling of potentially lawful speech. Recognising that users are increasingly locked into a handful of social media eco-systems, regulation of speech on the internet should work towards dismantling the gatekeeping power of large social media companies and putting power back in the hands of individual speakers to choose the platforms they prefer and reclaim public discourse.


The author is grateful to Shweta Reddy from the Centre for Internet and Society for her inputs on this post.

4 thoughts on “Social Media & Intermediary Liability: Missing the Forest for the Trees?”

  1. Great read.
    The one platform that I wanted to see mentioned, and compared to the others, was WhatsApp, where the platform plays no role in editing, filtering, or prioritising content. Interoperability wouldn’t resolve the challenge of the lack of traceability on encrypted chat apps, which is one of the touted reasons for instituting intermediary liability.

    • Thanks. Yes, the traceability issue is slightly distinct. As you note, on private messaging platforms such as WhatsApp and Signal the intermediary performs no editorial role and merely transmits encrypted messages between users. In this context of private messaging, the traceability requirement raises privacy concerns over the government’s ability to access an individual’s private communications. It also implicates citizens’ right to association (and the right to keep those associations private). Also, as the SFLC report argues, the traceability requirement would illegally enlarge (through delegated legislation) a power of requesting decryption that is statutorily restricted under the IT Act.

  2. No, but still, this really makes sense. If we keep encrypted messaging aside (which mostly does not raise concerns of content takedown), interoperability could help restore some sort of democratic capability in social media. For instance, I may not want to see Facebook’s feed due to its weird curation, but my ability to comment, from my Twitter account, on a Facebook post that appears embedded in my Twitter feed could be a good step. It would promote a multiplicity of platforms as well as bring a better model of user autonomy in terms of using social media. This is a good one. Platform responsibility could also become more prominent that way.
