
Social Media Transparency: Schrödinger's cat?

While the IT Rules have attempted to bring transparency to the practices of SSMIs, these platforms continue to operate in an opaque manner, and the law needs to be revisited.

Anandita Mishra, Krishnesh Bapat

Big Tech, especially Facebook, is notoriously opaque, and the law as it stands does not enable disclosure. The extent of Facebook’s opacity is such that during a hearing of the Delhi Assembly’s Peace and Harmony Committee, its Head of Public Policy, Shivnath Thukral, refused to detail the measures the social media giant took in response to communally sensitive posts that were amplified on its platform during the Delhi riots of 2020.

Previously, former employee turned whistleblower Frances Haugen revealed that Facebook selectively moderated content and took action against only 3-5% of hate speech on the platform.

Such opacity enables Facebook and other such intermediaries to prioritise profit-making over the well-being of their users. For example, Facebook has consistently chosen not to allocate resources towards moderating non-English content, a fact that is in the public domain courtesy of Haugen. Opacity also prevents researchers, lawmakers and regulators from formulating effective policies to govern social media platforms.

Facebook’s refusal to engage with the Committee prevented the Delhi Assembly from deliberating on how to curb the incitement of violence on such platforms. Moreover, as a consequence of this opacity, social media intermediaries have become the real arbiters of online speech.

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

In a world where social media intermediaries exercise immense influence, it is worth exploring how the law has sought to increase transparency. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) seek to hold social media companies accountable but fail to achieve meaningful transparency. Rule 4(1)(d) of the IT Rules, 2021 requires significant social media intermediaries (SSMIs) to publish monthly compliance reports. In these reports, the platforms are required to:

1. Provide details of complaints received and action taken thereon; and

2. State the number of specific communication links or parts of information that the platform has removed or disabled access to through proactive monitoring.

Compliance reports by Big Tech

Google, Facebook, Instagram, WhatsApp and Twitter have been publishing compliance reports under the IT Rules, 2021. As per these reports, SSMIs appear to have taken down more content proactively using Artificial Intelligence (AI) than on the basis of user complaints. However, since they do not disclose the algorithms they use to take down content, it is not possible to determine whether the AI successfully removes only illegal content.

Facebook’s reporting around proactive monitoring is also misleading. In its reports, it adopts the metric of ‘proactive rate’, which refers to the percentage of ‘content actioned’ that it detected proactively before any user reported it. This metric is deceptive because the proactive rate is calculated only over content on which action was taken; it excludes all content on which no action was taken (which may otherwise be an area of concern), i.e. content that Facebook’s AI missed.
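To see why this flatters the platform, consider a minimal sketch in Python. The figures below are invented for illustration, not Facebook’s actual numbers: they simply show that a platform can report a proactive rate above 90% while removing only a tiny fraction of the violating content actually on it.

```python
# Hypothetical illustration of why a high 'proactive rate' can coexist
# with a low overall removal rate. All figures are invented assumptions.

total_violating_posts = 1000   # violating content actually on the platform
ai_detections = 50             # violating posts the AI caught proactively
user_reports_actioned = 5      # violating posts actioned after user reports

content_actioned = ai_detections + user_reports_actioned  # 55

# The reported metric: share of *actioned* content found proactively.
proactive_rate = ai_detections / content_actioned

# What readers might assume it reflects: share of *all* violating
# content that was actually removed.
overall_removal_rate = content_actioned / total_violating_posts

print(f"Proactive rate:       {proactive_rate:.0%}")    # ~91%
print(f"Overall removal rate: {overall_removal_rate:.1%}")  # 5.5%
```

In this sketch, the platform can truthfully report a proactive rate of about 91% even though roughly 95% of violating content was never actioned at all.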

This problem with the metric becomes glaring in light of the documents leaked by Frances Haugen, which show that Facebook boasted of proactively removing over 90% of identified hate speech in its “transparency reports” when internal records showed that “as little as 3-5% of hate” speech was actually removed. These documents confirm what civil society organisations have been asserting for years: that Facebook has been fuelling hate speech around the world because of its failure to moderate content, and that its algorithms amplify inflammatory content.

We cannot blindly trust SSMIs

Other concerns also remain. For one, the law does not provide any mechanism to verify the disclosures made by these SSMIs. Permitting the government to audit intermediaries may result in honest compliance reports, but would have a chilling effect on the functioning of the platforms. As Twitter said in response to the Attorney General of Texas’s demand for internal documents on content moderation, "any time a Twitter employee thinks about writing something related to content moderation, the employee knows that the AG has demanded access to it," and that would lead them to "think twice about what to write or what editorial decisions to make and document." Perhaps the solution lies in appointing a third-party auditor or establishing an independent regulator to oversee social media platforms. An audit by an independent third party would ensure fair investigation without compromising the autonomous functioning of the platforms, and would keep governmental interference at bay.

There are also concerns over the manner in which intermediaries choose to report. Facebook, in addition to using a problematic metric for proactive monitoring, states that it provided tools to users in 461 cases out of 519 complaints received in November, but does not reveal exhaustively what these tools were or whether the users were actually satisfied with its response. Nor do we know why Facebook took certain measures instead of others.


Transparency should enable people to understand how a platform like Facebook is governed. One way of achieving meaningful transparency is to ask intermediaries to reveal how many individuals they deploy to fact-check posts on their platforms and how much time is spent on each post, a question Facebook refused to answer before the Delhi Peace and Harmony Committee. Interestingly, the IT Rules were floated following a call for attention regarding the “misuse of social media platforms and spreading of fake news”, yet no SSMI other than Twitter discloses data on content takedowns for fake news in its compliance reports.

How to achieve meaningful transparency?

There is scope for overcoming these concerns. Rule 4(1)(d) of the IT Rules, apart from mandating that social media intermediaries reveal the information mentioned above, also requires them to provide ‘any other relevant information as may be specified’, presumably by the Ministry of Electronics and Information Technology (MeitY). A similar provision exists in Rule 18(4), under which the Ministry of Information and Broadcasting (MIB) has the power to ask for ‘additional information from the publisher as it may consider necessary’. In exercise of this power, the MIB issued a public notice dated May 26, 2021, specifying a particular format for the furnishing of information by publishers.

MeitY can similarly seek existing and additional information in a systematic manner by prescribing a format to be followed by SSMIs while reporting under Rule 4(1)(d). ‘Details of complaints received and action taken thereon’ cannot be limited to merely the number of complaints and responses to them, as the compliance reports published till now would have one believe. The Union government can ask intermediaries to explain to their users why and how grievances were redressed, and how long the redressal took.

Given the growing demand for greater due process and transparency in content moderation, the Santa Clara Principles on Transparency and Accountability in Content Moderation have recently been updated to version 2.0. The first principle suggests how data can be segregated: by category of rule violated, by format of the content at issue (e.g., text, audio, image, video, live stream), by source of the flag (e.g., governments, trusted flaggers, users, different types of automated detection), and by the locations of flaggers and impacted users (where apparent). The first principle, along with the other operational principles incorporated therein (notice and appeal), could be a starting point for the government to prescribe a format in which SSMIs are required to furnish information, along the lines sketched below.
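As an illustration, a single compliance-report entry segregated along these dimensions might look like the following minimal Python sketch. Every field name and value here is a hypothetical assumption for illustration, not a format prescribed by MeitY or by the Santa Clara Principles themselves.

```python
# A hypothetical compliance-report entry segregated along the dimensions
# suggested by the first Santa Clara Principle. Field names and values
# are invented assumptions, not an official reporting schema.

report_entry = {
    "rule_violated": "hate_speech",        # category of rule violated
    "content_format": "video",             # text / audio / image / video / live stream
    "flag_source": "automated_detection",  # government / trusted flagger / user / automation
    "flagger_location": "IN",              # where apparent
    "impacted_user_location": "IN",        # where apparent
    "items_actioned": 1240,                # content actioned in this bucket
    "appeals_received": 85,                # notice-and-appeal figures
    "appeals_upheld": 12,
}

# Aggregating many such entries would let regulators compare, for example,
# how much hate speech is caught by automation versus by user flags in India.
print(report_entry["rule_violated"], report_entry["items_actioned"])
```

Structured entries of this kind, unlike the bare complaint counts published today, would make the reports comparable across platforms and across reporting periods.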

This approach would enable the government to collect and analyse India-specific data which would further enable well-rounded regulation.

Anandita Mishra and Krishnesh Bapat are currently working as Associate Litigation Counsels at the Internet Freedom Foundation.
