Authored By: Rupal Barjatya
Symbiosis Law School, Pune
ABSTRACT
The proliferation of social media platforms has coincided with a disturbing rise in mob lynching incidents across India, often triggered by misinformation and hate speech circulated online. This article examines the liability of social media intermediaries under Section 79 of the Information Technology Act, 2000, particularly in light of recent amendments and evolving jurisprudence. It analyses the delicate balance between intermediary safe harbour provisions and accountability for user-generated content that incites violence. Through examination of legislative frameworks, judicial pronouncements, and recent regulatory developments, this article argues that while safe harbour protections remain necessary for digital innovation, stronger due diligence requirements and proactive content moderation obligations are essential to prevent the spread of inflammatory content that leads to mob violence.
INTRODUCTION
India has witnessed a troubling surge in mob lynching incidents over the past decade, with social media platforms frequently serving as catalysts for violence. WhatsApp rumours about child kidnappers, Facebook posts spreading communal hatred, and Twitter threads inciting vigilante justice have repeatedly preceded brutal attacks on innocent individuals. The 2018 lynching cases in Assam and Maharashtra, triggered by forwarded WhatsApp messages falsely identifying victims as child abductors, starkly illustrated how digital misinformation can translate into real-world violence.
This phenomenon raises critical questions about the responsibility of social media intermediaries in controlling content that incites mob violence. Section 79 of the Information Technology Act, 2000 provides a safe harbour to intermediaries, exempting them from liability for third-party content. However, recent amendments through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and subsequent modifications have sought to recalibrate this immunity by imposing enhanced due diligence obligations.
The central issue examined in this article is whether the current legal framework adequately addresses platform liability for content that leads to mob lynching, and whether recent amendments strike an appropriate balance between protecting free speech, fostering digital innovation, and preventing violence incited through social media.
LEGAL FRAMEWORK
The Information Technology Act, 2000 and Section 79
Section 79 of the IT Act provides the foundational safe harbour protection for intermediaries in India. Under Section 79(1), intermediaries are not liable for any third-party information, data, or communication link made available or hosted by them. This immunity, however, is conditional upon compliance with Section 79(2), which requires intermediaries to observe due diligence and not conspire, abet, aid, or induce unlawful acts. Section 79(3) further elaborates that intermediaries lose their safe harbour protection upon receiving actual knowledge of unlawful content, either through court orders or government notifications. The provision was modelled after the safe harbour regime in the United States under Section 230 of the Communications Decency Act, though with significant differences in implementation and scope.1
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
The 2021 Rules marked a paradigm shift in intermediary regulation in India. These rules impose several significant obligations on social media intermediaries, particularly those classified as “significant social media intermediaries” (platforms with over 50 lakh registered users in India).2 Key provisions include:
Due Diligence Requirements: Intermediaries must publish rules and regulations governing user access, inform users not to host unlawful content, and establish grievance redressal mechanisms with monthly compliance reports.
Content Removal Timelines: Upon receiving complaints, intermediaries must acknowledge within 24 hours and dispose of complaints within 15 days. For certain categories of content, including content depicting individuals in full or partial nudity or sexual acts, removal must occur within 24 hours.
Traceability Mandate: Significant social media intermediaries must enable identification of the first originator of information, particularly for messages that undermine national security or public order. This provision has been particularly controversial in the context of encrypted messaging platforms.
Proactive Monitoring: Intermediaries must deploy technology-based measures, including automated tools, to proactively identify and remove unlawful content, especially content depicting child sexual abuse material.
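The complaint-handling timelines above reduce to simple date arithmetic: acknowledge within 24 hours, dispose of the complaint within 15 days, and remove certain expedited categories within 24 hours. The sketch below illustrates how a platform's compliance system might compute these deadlines; the category labels and function names are illustrative assumptions, not terms drawn from the Rules themselves.

```python
from datetime import datetime, timedelta

# Windows drawn from the 2021 Rules as summarised above:
#  - acknowledge every complaint within 24 hours
#  - dispose of complaints within 15 days
#  - remove certain categories (e.g. content depicting nudity or
#    sexual acts) within 24 hours of the complaint
ACK_WINDOW = timedelta(hours=24)
DISPOSAL_WINDOW = timedelta(days=15)
EXPEDITED_REMOVAL_WINDOW = timedelta(hours=24)

# Illustrative labels only; the Rules describe these categories in prose.
EXPEDITED_CATEGORIES = {"nudity", "sexual_act", "impersonation"}

def compliance_deadlines(received_at: datetime, category: str) -> dict:
    """Return the key deadlines for a complaint received at `received_at`."""
    deadlines = {
        "acknowledge_by": received_at + ACK_WINDOW,
        "dispose_by": received_at + DISPOSAL_WINDOW,
    }
    if category in EXPEDITED_CATEGORIES:
        deadlines["remove_by"] = received_at + EXPEDITED_REMOVAL_WINDOW
    return deadlines

d = compliance_deadlines(datetime(2024, 1, 1, 9, 0), "nudity")
```

For a complaint received on the morning of 1 January, the sketch yields an acknowledgment deadline the next morning and a disposal deadline fifteen days later, with the expedited 24-hour removal deadline attaching only to the listed categories.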
CONSTITUTIONAL FRAMEWORK
Article 19(1)(a) of the Constitution guarantees freedom of speech and expression3, but Article 19(2) permits reasonable restrictions on grounds including public order, decency, morality, and incitement to an offence.4 The Supreme Court in Shreya Singhal v. Union of India (2015)5 struck down Section 66A of the IT Act for being unconstitutionally vague and violative of Article 19(1)(a), establishing that restrictions on online speech must meet the same constitutional standards as offline speech.
Additionally, Sections 153A (promoting enmity between groups), 295A (deliberately outraging religious feelings), 505 (statements conducing to public mischief), and 506 (criminal intimidation) of the Indian Penal Code criminalize conduct directly relevant to content that may trigger mob lynching.6
JUDICIAL INTERPRETATION
Shreya Singhal v. Union of India (2015)
This landmark judgment fundamentally shaped intermediary liability in India. The Supreme Court struck down Section 66A of the IT Act while reading down Section 79 to provide clarity on when intermediaries lose safe harbour protection. The Court held that intermediaries gain actual knowledge only through court orders or government notifications under Section 79(3)(b), not through private complaints. This judgment established that intermediaries are not required to proactively monitor content and can only be expected to act upon receiving such formal knowledge.7
However, the Court also recognized that Section 79(3) contemplates removal of unlawful content upon gaining actual knowledge, thereby acknowledging that safe harbour is not absolute when platforms become aware of illegality.
Tehseen S. Poonawalla v. Union of India (2018)
In response to the wave of mob lynching incidents, particularly those triggered by rumours spread through social media, the Supreme Court issued preventive and remedial directions. While this case did not specifically address platform liability, the Court recommended that Parliament consider enacting a separate offence for mob lynching and required state governments to take preventive measures, including designating senior police officers to prevent incidents.
The Court recognized that WhatsApp and social media platforms were being used to spread rumours leading to violence, implicitly acknowledging the role of digital platforms in facilitating mob violence, though it stopped short of imposing direct obligations on intermediaries.8
Various High Court Proceedings
Multiple High Courts have dealt with challenges to the 2021 Intermediary Rules, particularly concerning the traceability requirement and proactive monitoring obligations. While these cases remain pending or have been transferred to the Supreme Court, they raise fundamental questions about whether enhanced due diligence requirements effectively override the safe harbour protection established under Section 79 and interpreted in Shreya Singhal.
CRITICAL ANALYSIS
The Safe Harbour Dilemma
The fundamental tension in intermediary liability law lies in balancing innovation with accountability. Complete immunity encourages platform growth and protects free expression but may enable harmful content proliferation. Conversely, excessive liability chills speech and innovation, potentially transforming intermediaries into censors who over-remove content to avoid legal consequences.
In the context of mob lynching, this dilemma becomes acute. The viral nature of inflammatory content on platforms like WhatsApp, where end-to-end encryption prevents platform monitoring, creates situations where harmful misinformation spreads rapidly before any intervention is possible. The 2018 lynching incidents demonstrated that even after platforms took remedial action, such as WhatsApp limiting message forwarding, the damage had already occurred.
Inadequacy of Notice-and-Takedown
The notice-and-takedown framework under Section 79, as interpreted in Shreya Singhal, is inherently reactive. By the time a court order or government notification is issued, inflammatory content may have already reached millions and triggered violence. The 24-hour and 15-day timelines under the 2021 Rules, while faster than judicial processes, remain insufficient when content can incite mob violence within hours of posting.
Moreover, the requirement of court orders or government notifications for establishing actual knowledge creates procedural bottlenecks. Law enforcement agencies, already overburdened, may lack the technical expertise or resources to identify and report such content promptly.
The Traceability Paradox
The traceability mandate under the 2021 Rules aims to identify originators of harmful content, theoretically enabling accountability for those spreading inflammatory messages. However, this requirement faces significant implementation challenges, particularly for encrypted platforms. Breaking encryption to enable traceability potentially compromises user privacy and security for all users, not just those spreading harmful content.
Furthermore, traceability alone does not prevent mob lynching. Identifying the message originator after violence has occurred serves investigative purposes but fails to address the preventive dimension that should be the primary objective of platform regulation in this context.
Global Comparative Perspectives
Germany’s Network Enforcement Act (NetzDG) requires platforms to remove manifestly unlawful content within 24 hours and other unlawful content within seven days, with heavy penalties for non-compliance.9 While this approach ensures faster content removal, it has been criticized for encouraging over-removal and privatizing censorship.
The European Union’s Digital Services Act creates a comprehensive framework distinguishing between different sizes of platforms and imposing proportionate obligations, including systemic risk assessments for very large platforms. This tiered approach recognizes that liability frameworks should vary based on platform size and reach.10
Australia’s Online Safety Act provides the eSafety Commissioner with extensive powers to require content removal, particularly for cyberbullying and image-based abuse. This centralized regulatory approach contrasts with India’s reliance on judicial intervention for content removal.11
These international models suggest that effective regulation requires clear definitions of unlawful content, swift removal mechanisms, proportionate obligations based on platform size, and independent oversight rather than complete reliance on government notifications or court orders.
RECENT DEVELOPMENTS
2023 Amendments and Clarifications
Following extensive litigation and criticism of the 2021 Rules, the government has issued various clarifications regarding intermediary obligations. In 2023, the Ministry of Electronics and Information Technology clarified that the traceability requirement applies only to significant social media intermediaries and only for specific categories of content affecting national security and public order.
Additionally, the government has established a Grievance Appellate Committee to hear appeals against intermediary decisions on content removal, providing users with recourse beyond approaching platforms directly. This institutional mechanism represents progress toward more structured content moderation governance.
Platform Responses
Major platforms have implemented various measures to combat misinformation leading to mob violence. WhatsApp has limited message forwarding, introduced forward labels, and partnered with fact-checking organizations. Facebook and Instagram have deployed artificial intelligence tools to identify hate speech and implemented content warning systems for sensitive material.
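The forward-limiting and labelling measures described above can be pictured as a per-message hop counter: each forward increments the counter, forwarding is capped at a small number of chats, and a provenance label appears once the counter crosses a threshold. The following is a minimal sketch under stated assumptions; the constants, class, and function names are illustrative, not WhatsApp's actual implementation.

```python
FORWARD_LIMIT = 5           # assumed cap on chats per forwarding action
HIGHLY_FORWARDED_AFTER = 5  # assumed hop count triggering the stronger label

class Message:
    def __init__(self, text: str, forward_hops: int = 0):
        self.text = text
        self.forward_hops = forward_hops  # how many times this has been forwarded

def forward(message: Message, recipients: list) -> Message:
    """Forward to at most FORWARD_LIMIT chats, incrementing the hop counter."""
    if len(recipients) > FORWARD_LIMIT:
        raise ValueError(f"cannot forward to more than {FORWARD_LIMIT} chats at once")
    return Message(message.text, message.forward_hops + 1)

def label(message: Message) -> str:
    """Provenance label so recipients can see the message is not original."""
    if message.forward_hops >= HIGHLY_FORWARDED_AFTER:
        return "Forwarded many times"
    return "Forwarded" if message.forward_hops > 0 else ""
```

Note that such friction slows virality without inspecting message content, which is why it is compatible with end-to-end encryption: the platform counts hops, not words.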
However, these voluntary measures, while beneficial, remain inconsistent across platforms and lack external oversight. The effectiveness of automated content moderation in identifying context-specific hate speech in India’s multilingual environment remains questionable.
Pending Legislative Proposals
The Digital India Act, currently under consultation, proposes to replace the IT Act with a comprehensive framework addressing contemporary digital challenges. Early discussions suggest it may introduce concepts like platform accountability for algorithmic amplification of harmful content, recognizing that recommendation systems play a crucial role in content virality.
Additionally, proposals for a separate law criminalizing mob lynching, as suggested in Tehseen Poonawalla, remain under consideration, though legislative action has been slow.
SUGGESTIONS AND WAY FORWARD
Redefining Actual Knowledge
The law must evolve beyond the binary framework of safe harbour with or without actual knowledge. A graduated liability model could hold intermediaries accountable based on their actions after being made aware of harmful content through various mechanisms including complaints from affected users, reports from trusted flaggers, or detection through their own systems, not merely court orders or government notifications.
Establishing “constructive knowledge” standards where platforms deploy content moderation systems could incentivize proactive measures without completely eliminating safe harbour protections.
Strengthening Grievance Redressal
The grievance redressal mechanism under the 2021 Rules requires enhancement. Grievance officers must be empowered with clear guidelines on content categories requiring priority action, including content that may incite violence. Training programs on India’s socio-cultural context, communal sensitivities, and patterns of incitement leading to mob violence are essential.
Furthermore, the Grievance Appellate Committee should be adequately resourced and transparent in its decision-making, publishing regular reports on appeals and their outcomes to enable public scrutiny and build jurisprudence around content moderation decisions.
Context-Aware Content Moderation
Automated content moderation tools must be trained on India-specific datasets encompassing regional languages, cultural contexts, and coded language often used to incite communal violence. Platforms should be required to invest in human moderators familiar with local contexts who can make nuanced judgments that algorithms may miss.
Collaboration between platforms, civil society organizations, and academic institutions can help develop more sophisticated detection systems for content that may trigger mob violence, considering factors like the identity of posters, their reach, the historical context of content, and geographical patterns of previous incidents.
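The multi-factor assessment described above can be sketched as a toy risk score that combines a content signal with reach and local-history signals before routing a post to human review. Everything here is an illustrative assumption: the weights, thresholds, and keyword list do not reflect any platform's actual moderation policy, and real systems would use trained multilingual classifiers rather than keyword matching.

```python
# Hypothetical trigger phrases; real systems would use trained classifiers
# covering regional languages and coded speech, as argued above.
INFLAMMATORY_TERMS = {"child kidnapper", "outsiders", "attack them"}

def risk_score(text: str, follower_count: int, region_prior_incidents: int) -> float:
    """Score a post from 0 to 1 using content, reach, and local-history signals."""
    text_l = text.lower()
    content = 1.0 if any(t in text_l for t in INFLAMMATORY_TERMS) else 0.0
    reach = min(follower_count / 100_000, 1.0)      # saturates at 100k followers
    history = min(region_prior_incidents / 5, 1.0)  # saturates at 5 prior incidents
    return 0.6 * content + 0.2 * reach + 0.2 * history

def needs_human_review(score: float, threshold: float = 0.5) -> bool:
    """Route high-risk posts to moderators familiar with the local context."""
    return score >= threshold
```

The design choice worth noting is that the content signal alone is weighted below the review threshold's complement: contextual factors (who posted, where, and with what history) can escalate a borderline post, which mirrors the article's argument that context, not text alone, predicts incitement.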
Transparency and Accountability Mechanisms
Mandatory transparency reports detailing content removal requests, response times, and outcomes should be standardized across platforms. These reports must include specific categories for hate speech, incitement to violence, and misinformation that could lead to physical harm.
Additionally, independent social audits of platform content moderation practices by civil society organizations and academic researchers can provide external validation and identify systematic failures in addressing inflammatory content.
User Education and Digital Literacy
Regulatory interventions must be complemented by user education initiatives. Partnerships between government and platforms on digital literacy campaigns, teaching users to identify misinformation, verify content before forwarding, and understand the consequences of spreading unverified information, are crucial for long-term solutions.
Schools and educational institutions should integrate digital citizenship education into curricula, emphasizing responsible social media use and critical evaluation of online information.
Law Enforcement Capacity Building
Police departments and law enforcement agencies require training and resources to monitor social media trends, identify content that may trigger mob violence, and coordinate with platforms for swift content removal. Dedicated cyber cells with linguistic expertise and cultural sensitivity can serve as bridges between law enforcement and digital platforms.
Establishing standardized protocols for emergency content removal requests from law enforcement, with appropriate judicial oversight to prevent abuse, can enable faster responses to imminent threats while protecting constitutional rights.
CONCLUSION
The intersection of mob lynching and social media presents one of the most challenging issues in contemporary internet governance. Section 79 of the IT Act, conceived in 2000, struggles to address the complexities of modern social media platforms where content virality, algorithmic amplification, and user behaviour patterns create unprecedented risks of real-world violence.
The 2021 Intermediary Rules and subsequent amendments represent significant steps toward enhanced platform accountability, but gaps remain. The fundamental challenge lies in crafting a legal framework that prevents violence incited through social media without undermining the safe harbour protections essential for digital innovation or empowering excessive state censorship.
Moving forward, India must develop a nuanced approach that recognizes different types of intermediaries, imposes proportionate obligations, creates effective grievance mechanisms, and fosters collaboration between platforms, government, civil society, and users. The objective should not be to eliminate intermediary safe harbour but to condition it upon meaningful due diligence that includes proactive measures against content likely to incite violence.
As digital platforms become increasingly central to public discourse and information dissemination, ensuring they serve as spaces for legitimate expression rather than catalysts for violence is not merely a legal imperative but a societal necessity. The law must evolve to meet this challenge while preserving the constitutional values of free speech and expression that underpin India’s democracy.
References:
1 The Information Technology Act 2000
2 The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021
3 The Constitution of India 1950, art 19(1)(a)
4 The Constitution of India 1950, art 19(2)
5 Shreya Singhal v. Union of India, (2015) 5 SCC 1
6 The Indian Penal Code 1860, ss 153A, 295A, 505, 506
7 Shreya Singhal v. Union of India, (2015) 5 SCC 1
8 Tehseen S. Poonawalla v. Union of India, (2018) 9 SCC 501
9 Network Enforcement Act (NetzDG), Germany [2017]
10 Digital Services Act, European Union [2022]
11 Online Safety Act, Australia [2021]