
Balancing Platform Power and Free Expression: A Competition-Informed Analysis of India’s Social Media Intermediary Regulation

Authored By: Rasika Pitale

Maharashtra National Law University, Nagpur.

Abstract 

India’s intermediary regulation framework, governed by the IT Act, 2000 and reinforced through the 2021 Intermediary Guidelines, is primarily designed to ensure platform accountability for unlawful or harmful third-party content. However, the regime does not adequately address the structural dominance of major social media platforms, whose moderation decisions—often algorithm-driven—shape digital free expression more powerfully than statutory censorship. This article argues that platform dominance, combined with liability-avoidance incentives and opaque moderation policies, produces systematic over-removal of lawful speech, a constitutional blind spot that Article 19(2) standards do not reach. Through doctrinal analysis, competition law principles, and comparative study of the EU’s Digital Markets Act (DMA), Digital Services Act (DSA), and Section 230 reform debates, this article proposes an ex-ante transparency and neutrality-based regulatory model for dominant intermediaries. It concludes that platform power must be regulated not only for economic fairness but also for speech-market integrity.

  1. Introduction 

India’s digital public sphere is no longer a mere extension of offline discourse—it is a privately  mediated marketplace of expression, governed by intermediaries that host, curate, amplify,  and suppress user-generated content at unprecedented scale. The regulatory approach toward  intermediaries in India, encapsulated in the Information Technology Act, 2000 (“IT Act”) and  the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules,  2021 (“2021 Rules”), was built to shield platforms from liability while ensuring they act  responsibly upon acquiring knowledge of unlawful content. 

The constitutional turning point in India’s digital speech jurisprudence came with Shreya  Singhal v. Union of India (2015), where the Supreme Court invalidated Section 66A of the IT  Act for vagueness, arbitrariness, and disproportionate chilling of free speech under Article  19(1)(a). The judgment reaffirmed that any restriction on speech must derive legitimacy from a clearly defined law, must satisfy the reasonableness requirement, and must fall within  the permissible grounds under Article 19(2). 

Yet, a new regulatory actor has emerged since Shreya Singhal: not the legislature, not the executive, and not the courts—but dominant social media platforms, whose private content moderation policies have become the de facto code for digital expression. These platforms determine what millions may say, read, or access online. Unlike state censorship, which is constitutionally scrutinized, private censorship via moderation is treated as contractual governance, even when exercised by structurally dominant intermediaries that resemble digital infrastructure rather than optional services.

This article argues that intermediary regulation must evolve beyond the speech-state  dichotomy to incorporate a speech-market failure lens, recognizing that platform dominance,  risk-averse compliance, advertiser influence, and algorithmic opacity collectively distort free  expression. The central thesis is that market power can suppress speech as effectively as  vague penal statutes, and therefore, competition law principles must inform intermediary  regulation. 

  2. Research Methodology

This article adopts a doctrinal and comparative research approach. Primary sources analyzed include the IT Act, 2000, the 2021 Rules, constitutional provisions, and judicial precedents. The article integrates competition law principles—particularly dominance, gatekeeping, and neutrality—to assess speech distortions arising from platform power. Comparative analysis covers EU, UK, and US intermediary regulation models to highlight ex-ante obligations for dominant platforms, procedural moderation safeguards, and safe-harbor reform debates. Hypothetical illustrations are used to analyze moderation incentives without relying on proprietary platform data or content.

  3. Legal Framework in India: The Safe-Harbor Regime and Due-Diligence Model

3.1 Section 79, IT Act: Conditional Immunity

Section 79 grants intermediaries exemption from liability for third-party content if they:

  • Do not initiate transmission,
  • Do not select the receiver, 
  • Do not modify information, 
  • And observe prescribed due-diligence standards. 

This immunity reflects a knowledge-based liability model, not a proactive monitoring duty.  However, while Section 79 prevents mandatory pre-screening, the 2021 Rules operationalize  fast-track removal obligations, creating significant compliance pressure on intermediaries. 

3.2 2021 Intermediary Guidelines: Procedural Duties 

The 2021 Rules impose obligations including: 

  • Appointment of a Grievance Redressal Officer,
  • Content removal within 36 hours upon court/government order, 
  • Removal within 24 hours for sensitive flagged content, 
  • And monthly compliance reporting. 

Although framed as procedural accountability, the Rules do not mandate transparency of platform-initiated moderation or impose neutrality duties on dominant platforms. The emphasis is on timely takedown, not accurate, fair, or reviewable moderation.

3.3 Constitutional Limitation Gap 

Article 19(2) constrains state-imposed speech restrictions, but no equivalent statutory  moderation benchmarks exist for private intermediaries, even when they dominate the  speech market. Thus: 

  • The state must justify restrictions, but
  • Platforms need not justify removals unless challenged under consumer or contract law,
  • Which rarely captures systemic speech harms,
  • And provides no ex-ante oversight.

3.4 Market Dominance Meets Moderation Power 

Modern platforms use:

  • Automated filters, 
  • AI moderation, 
  • Shadow-banning, 
  • Demonetization, 
  • Algorithmic downranking. 

While not mandated by law, these measures amount to de facto pre-screening, especially on platforms seeking to avoid losing safe-harbor protection or attracting regulatory scrutiny. For a platform moderating content for hundreds of millions of users, even a small error rate leads to mass suppression of lawful speech.
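The scale problem can be made concrete with a back-of-the-envelope calculation. The sketch below uses purely hypothetical figures (post volume, automation share, and error rate are assumptions, not data from any platform) to show how even a one per cent false-positive rate translates into millions of lawful posts suppressed daily.

```python
# Hypothetical figures only; the point is the arithmetic of scale, not any platform's data.
daily_posts = 500_000_000        # assumed user posts per day on a dominant platform
automated_share = 0.95           # assumed share of moderation decisions made by automated filters
false_positive_rate = 0.01       # assumed 1% of lawful posts wrongly flagged as violating

wrongly_suppressed = daily_posts * automated_share * false_positive_rate
print(f"Lawful posts wrongly suppressed per day: {wrongly_suppressed:,.0f}")
# Under these assumptions: roughly 4,750,000 lawful posts suppressed every day.
```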

The law assumes users can exit or choose alternatives—but network effects and platform lock-in contradict this assumption, making moderation decisions more akin to infrastructural governance than optional private enforcement.

  4. Judicial Interpretation: Accountability without Market-Power Recognition

4.1 Shreya Singhal v. Union of India, (2015) 5 SCC 1

The Supreme Court: 

  • Struck down Section 66A for being vague and overbroad, 
  • Held that online speech deserves equal constitutional protection, 
  • And that chilling effects are unconstitutional. 

However, the judgment did not evaluate how intermediary dominance interacts with  moderation incentives to chill speech privately. 

4.2 MySpace Inc. v. Super Cassettes Industries Ltd., 2017 (236) DLT 478 (Del)

The Delhi High Court reaffirmed that intermediaries are not required to monitor content  proactively, but held they must act upon knowledge of infringement. 

This limits liability on paper, but dominant platforms still adopt aggressive automated  removal in practice.

4.3 Prajwala v. Union of India, (2018) 2 SCC 722

The Court ordered intermediaries to remove rape videos and similar content, reflecting  judicial support for takedown accountability. 

Again, the focus was harm-removal, not dominance-moderation incentives.

4.4 Constitutional Principles Courts Could Extend 

Indian courts increasingly apply: 

  • Proportionality (Puttaswamy, 2017), 
  • Fairness against arbitrariness,
  • Procedural reasonableness,
  • Impact-based rights analysis.

If these principles are extended to dominant intermediaries, courts could recognize:

1. Moderation = speech restriction, when infrastructure-scale platforms enforce it,

2. Dominance = limited exit, undermining assumptions of contractual voluntariness,

3. Opacity = arbitrariness, triggering constitutional fairness concerns,

4. Automation at scale = high-impact censorship, requiring procedural safeguards.

This extension is doctrinally logical, yet judicially absent so far.

  5. Competition Law as a Missing Dimension in Speech Governance

5.1 Dominance Doctrine under Competition Act, 2002 

Under Section 4 of the Competition Act, 2002, an enterprise is dominant if it enjoys a position of strength that enables it to operate independently of competitive forces in the relevant market, or to affect its competitors, consumers, or the relevant market in its favour.

Characteristics include: 

  • Network effects,
  • High switching costs,
  • Data advantage,
  • User lock-in,
  • Gatekeeping control.

These conditions match India’s largest social platforms today. 

5.2 Gatekeeping = Market Access Control 

Competition law prevents dominant actors from blocking market access unfairly. Social platforms block speech-market access rather than product-market access, but the underlying principle is identical: control of participation in an ecosystem by a dominant gatekeeper.

5.3 Over-Removal Incentive as a Market Failure 

Because liability risk scales with user base, dominant platforms adopt: 

  • “When in doubt, remove” policies, 
  • Advertiser-friendly suppression, 
  • Automated filtering without appeals, 
  • High false-positive takedowns. 

This creates: 

  • Market-enabled chilling effects,
  • Private censorship driven by dominance, not legality,
  • No independent oversight,
  • No competition neutrality duty,
  • No mandatory transparency audits.

Thus, dominance causes speech distortion through compliance incentives, a phenomenon  neither competition law nor IT rules currently address explicitly. 
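The incentive structure can be formalized as a simple expected-cost comparison. The sketch below is a stylized model with invented payoff values, not an empirical claim about any platform; it only makes explicit why, when the potential cost of leaving doubtful content up (liability, regulatory scrutiny, advertiser flight) dwarfs the private cost of a wrongful takedown, removal becomes the default choice.

```python
# Stylized expected-cost model of the "when in doubt, remove" incentive.
# All payoff values are invented for illustration.

def expected_cost(remove: bool, p_unlawful: float,
                  cost_left_up: float, cost_wrongful_removal: float) -> float:
    """Expected cost to the platform of one decision on a flagged post."""
    if remove:
        # The platform bears a cost only if the content was actually lawful.
        return (1 - p_unlawful) * cost_wrongful_removal
    # The platform bears a cost only if the content turns out to be unlawful.
    return p_unlawful * cost_left_up

p = 0.05  # assume only a 5% chance that the flagged post is actually unlawful
keep = expected_cost(False, p, cost_left_up=100.0, cost_wrongful_removal=1.0)
drop = expected_cost(True, p, cost_left_up=100.0, cost_wrongful_removal=1.0)
print(f"expected cost of keeping:  {keep:.2f}")   # 5.00
print(f"expected cost of removing: {drop:.2f}")   # 0.95 -> over-removal is cost-minimizing
```

Because the cost asymmetry grows with user base and enforcement exposure, the model suggests that dominance amplifies, rather than dampens, the over-removal bias.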

  6. Policy Critique: The Accountability–Neutrality Imbalance

India’s regulatory model reflects three assumptions: 

  1. Intermediaries are neutral hosts, not active governors,
  2. Users have meaningful alternatives if content is removed, 
  3. Moderation policies are private contracts, not public infrastructure functions.

All three assumptions fail under structural dominance because:

  • Platforms amplify and suppress content algorithmically,
  • Users cannot meaningfully switch due to network effects,
  • And moderation suppresses speech at infrastructure scale without procedural scrutiny.

This gap produces a regulatory imbalance:

  • High takedown pressure, but
  • No neutrality duties, and
  • No transparency benchmarks, leading to unchecked private speech governance.
  7. A Four-Pillar Reform Framework for Dominant Intermediaries

Pillar 1 — Ex-Ante Transparency

Mandatory publication of: 

  • Content removal rates, 
  • AI false-positive rates, 
  • Shadow-ban disclosures, 
  • Moderation criteria logs, 
  • Algorithmic impact assessments. 
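As a rough illustration of what these disclosures could look like in practice, the sketch below computes the core metrics from a moderation log. The field names and figures are hypothetical assumptions for illustration; no existing Indian or EU reporting template is being reproduced.

```python
# Hypothetical transparency-report computation; field names and figures are invented.
from dataclasses import dataclass

@dataclass
class ModerationLog:
    total_posts: int          # posts published in the reporting period
    removed: int              # posts removed or suppressed on any ground
    removed_automated: int    # of those, actions taken solely by automated systems
    appeals_decided: int      # user appeals decided in the period
    appeals_upheld: int       # appeals in which the removal was found wrongful

def transparency_report(log: ModerationLog) -> dict:
    return {
        "removal_rate": round(log.removed / log.total_posts, 4),
        "automation_share": round(log.removed_automated / log.removed, 4),
        # Upheld appeals serve as a proxy for the false-positive rate of moderation.
        "estimated_false_positive_rate": round(log.appeals_upheld / log.appeals_decided, 4),
    }

print(transparency_report(ModerationLog(
    total_posts=10_000_000, removed=120_000, removed_automated=100_000,
    appeals_decided=8_000, appeals_upheld=2_400)))
```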

Pillar 2 — Competition-Linked Neutrality Duty 

Dominant platforms must: 

  • Avoid arbitrary or purely advertiser-driven suppression,
  • Maintain viewpoint-neutral moderation for lawful speech,
  • And not leverage dominance to silence controversial but legal discourse.

Pillar 3 — Independent Moderation Audits 

A statutory body or accredited third party should conduct:

  • Quarterly moderation accuracy audits, 
  • Algorithmic bias evaluations, 
  • And procedural fairness assessments. 

Pillar 4 — Statutory Appeals & Review Timelines 

Users must have access to: 

  • Binding appeal timelines, 
  • Reasoned removal explanations, 
  • Independent review of moderation errors, 
  • And restoration duties for wrongful removal. 

  8. Hypothetical Illustration: Dominance-Driven Speech Suppression

A national activist posts criticism of a regulatory policy. The speech is lawful, not  defamatory, not seditious, not inciting violence. However: 

  • The platform’s AI filter flags it due to mass reporting, 
  • The moderation team removes it within 24 hours to avoid scrutiny,
  • No reasoned explanation is provided, 
  • No appeal is available, 
  • The activist loses reach due to algorithmic downranking before removal,
  • And no alternative platform can match the original reach due to dominance lock-in. 

Result: 

Speech is suppressed without constitutional scrutiny, without neutrality, without transparency, and without appeal—a market-enabled chilling effect identical in outcome  to statutory overbreadth, but unreviewable due to its private origin. 

This demonstrates that dominance + moderation opacity = infrastructure-scale  censorship, requiring ex-ante regulation. 

  9. Conclusion

India’s intermediary regulation regime has evolved significantly to address digital harm,  misinformation, and platform accountability. However, it remains structurally incomplete because it fails to regulate platform market power as a speech-governance risk. Dominant  intermediaries today perform a regulatory function over expression, yet their moderation  decisions remain opaque, non-neutral, unreviewable, and unbenchmarked by statute. This  creates a constitutional blind spot where speech can be chilled not by law, but by market  dominance and liability-avoidance incentives, leading to systematic over-removal of lawful  expression. 

Global regulatory models, particularly the EU’s ex-ante gatekeeper obligations under the DMA and DSA, demonstrate that dominance-linked neutrality and transparency are not merely economic tools but democratic necessities for preserving speech ecosystems. India must similarly adopt ex-ante transparency mandates, neutrality duties for dominant intermediaries, independent audits, and enforceable user appeal rights. Without such reforms, India risks replacing concerns of state censorship with a more pervasive and unrestrained form of censorship: private, automated, and dominance-driven.

  10. Algorithmic Moderation, Delegated Governance, and the Problem of Unchecked Digital Discretion

One of the most significant shifts in intermediary governance globally has been the migration from human review to automated decision-making. India’s 2021 Rules mandate quick takedowns but do not regulate the internal tools used to execute them. This regulatory silence is critical, because the process of moderation is no longer reactive: it is predictive, automated, and system-wide.

Dominant platforms deploy algorithmic moderation for reasons including:

  • Scaling governance for billions of posts,
  • Filtering spam, explicit content, or mass-reported posts, 
  • Protecting brand partnerships and advertisers, 
  • And minimizing the risk of losing safe-harbor protection. 

However, algorithmic systems are not legally neutral entities. They operate on:

  • Pattern recognition, 
  • Keyword flagging, 
  • Probability scoring, 
  • And mass-reporting heuristics. 

This produces a unique form of censorship: not censorship by law, but censorship by prediction.
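To make "censorship by prediction" concrete, the sketch below shows a hypothetical auto-flagging rule of the kind described above: a risk score built from keyword matches and report counts triggers suppression once it crosses a threshold, without any assessment of legality. The keywords, weights, and threshold are invented for illustration and do not describe any actual platform's system.

```python
# Hypothetical predictive-moderation rule; keywords, weights, and threshold are invented.
FLAGGED_KEYWORDS = {"protest", "boycott", "corruption"}  # lawful words can still raise the score

def risk_score(text: str, report_count: int) -> float:
    words = set(text.lower().split())
    keyword_hits = len(words & FLAGGED_KEYWORDS)
    # Keyword matches and mass reporting both raise the score,
    # regardless of whether the underlying speech is lawful.
    return min(1.0, 0.2 * keyword_hits + 0.1 * report_count)

def auto_moderate(text: str, report_count: int, threshold: float = 0.5) -> str:
    return "suppress" if risk_score(text, report_count) >= threshold else "keep"

# Lawful criticism that attracts coordinated mass reporting is suppressed by prediction alone.
print(auto_moderate("peaceful protest against the new policy", report_count=4))  # -> suppress
```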

The constitutional concern here is not only removal, but invisible pre-removal  suppression. Many platforms engage in: 

  • Shadow-banning (content is not removed but hidden from feeds),
  • Downranking (algorithm reduces visibility without notifying user),
  • De-amplification (post is technically live but reach is throttled),
  • And mass-report-triggered auto-flags.

These measures are especially dangerous when deployed by dominant platforms because:

  1. They are invisible to users,
  2. They lack reasoned orders,
  3. They are non-appealable in most cases, 
  4. And they suppress speech at scale before any legal adjudication. 
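The visibility-based measures described earlier (shadow-banning, downranking, de-amplification) can be pictured as an opaque multiplier applied to a post's reach rather than an outright removal. The sketch below is a hypothetical illustration with invented numbers; actual ranking systems are far more complex and undisclosed, which is precisely the transparency problem.

```python
# Hypothetical de-amplification: the post stays "up" but its reach is throttled.
def effective_reach(audience_size: int, visibility_multiplier: float) -> int:
    """Estimated number of users actually shown the post by the ranking system."""
    return int(audience_size * visibility_multiplier)

followers = 200_000
print(effective_reach(followers, 1.00))  # 200000 -> normal distribution to followers
print(effective_reach(followers, 0.02))  # 4000   -> downranked: no removal, no notice, no appeal
```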

Thus, the moderation discretion of platforms becomes a delegated regulatory power  without judicial review, creating a digital ecosystem where the platform—not the  Constitution—determines the outer limit of permissible discourse.

  11. Free Speech as a Digital Market Right: Intersecting Competition Law, Consumer Choice, and Democratic Participation

Competition law’s primary goal is preventing abuse of dominance in markets. But modern digital platforms operate in a dual market:

  1. The economic market (ads, data monetization, engagement), 
  2. And the speech market (visibility, amplification, participation). 

The second is rarely acknowledged by Indian regulation, yet it is the more constitutionally  relevant market today. 

Dominance in the speech market creates harms including: 

  • Suppression of controversial but lawful discourse,
  • Manipulation of visibility instead of outright removal,
  • Lack of consumer alternatives for equivalent reach,
  • And moderation rules optimized for commercial safety, not constitutional fairness.

From a consumer perspective, users sign clickwrap agreements assuming platforms will host speech fairly. But when dominance eliminates real alternatives, the contract becomes the only avenue for participating in mass digital discourse—making moderation decisions a public-impact governance function, not purely private contractual enforcement.

Thus, competition law must evolve to recognize that platform neutrality is not just an economic fairness principle, but a democratic participation guarantee.

India’s Competition Act currently provides ex-post remedies, but the nature of speech removal  requires ex-ante neutrality duties, because wrongful takedowns cannot be remedied  meaningfully after the moment of discourse has passed. 

  12. Regulatory Convergence: Why India Needs a Speech-Market Regulator, Not Just a Takedown Regulator

India’s digital regulation landscape is evolving in silos:

  • The IT Rules regulate takedowns and grievance officers,
  • The DPDP Act, 2023 regulates personal data processing,
  • And competition law regulates market abuse.

But none of these regimes currently provide: 

  • A neutrality duty for dominant speech intermediaries,
  • A statutory transparency requirement for platform-initiated content suppression,
  • Or an independent oversight mechanism to audit moderation error rates.

This gap becomes more serious when seen in light of regulatory convergence in other  jurisdictions: 

EU Model 

The EU’s DMA and DSA create obligations for dominant platforms including:

  • Algorithmic transparency, 
  • Fairness duties, 
  • Non-discrimination requirements, 
  • Independent audit mechanisms, 
  • And user appeal rights. 

UK Model 

The UK Online Safety Act mandates: 

  • Risk assessments, 
  • Transparency reporting, 
  • And safety-by-design duties, 
  • Though neutrality duties remain partial. 

US Model

While Section 230 offers broad safe-harbor protection, recent reform debates highlight the  need for: 

  • Carve-outs for dominant platforms, 
  • Transparency duties, 
  • And reduction of automated over-removal incentives. 

India’s Gap 

India remains primarily focused on ex-post removal pressure, not ex-ante platform neutrality or speech-market fairness.

Thus, India requires a speech-market regulator or statutory reform creating:

  1. Moderation neutrality duties for dominant intermediaries,
  2. Transparency reports for all platform-initiated content suppression,
  3. Independent audits of algorithmic moderation,
  4. And binding appeal timelines for wrongful removals.

Without this, digital discretion remains unchecked, unreviewed, and dominance-driven, producing a speech ecosystem governed by corporate risk metrics, not constitutional doctrine.

References

  1. Information Technology Act, 2000, § 79, No. 21, Acts of Parliament, 2000 (India).
  2. INDIA CONST. art. 19, cl. 1(a).
  3. INDIA CONST. art. 19, cl. 2.
  4. Shreya Singhal v. Union of India, (2015) 5 S.C.C. 1 (India).
  5. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, G.S.R. 139(E) (India).
  6. MySpace Inc. v. Super Cassettes Indus. Ltd., 2017 S.C.C. OnLine Del 6382 (India).
  7. Prajwala v. Union of India, (2018) 2 S.C.C. 722 (India).
  8. Competition Act, 2002, § 4, No. 12, Acts of Parliament, 2003 (India).
  9. Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on Contestable and Fair Markets in the Digital Sector (Digital Markets Act), 2022 O.J. (L 265) 1.
  10. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services (Digital Services Act), 2022 O.J. (L 277) 1.
  11. MATHIAS KETTEMANN, THE NORMATIVE ORDER OF THE INTERNET 102–118 (Oxford Univ. Press 2020).
  12. ANDREW MURRAY, INFORMATION TECHNOLOGY LAW: THE LAW AND SOCIETY 233–240 (4th ed., Oxford Univ. Press 2019).
  13. Anupam Chander, The Coming North American Digital Markets Regulation Debate, 98 WASH. L. REV. 1, 23–41 (2023).
  14. Arun K. Thiruvengadam, Constitutionalism and Digital Governance in India, 12 INDIAN J. CONST. L. 45, 50–72 (2024).
  15. Apar Gupta, Intermediary Liability and the Over-Compliance Risk in India, 7 N.L.U.D. J. LEGAL STUD. 112, 115–132 (2022).
  16. TARLETON GILLESPIE, CUSTODIANS OF THE INTERNET: PLATFORMS, CONTENT MODERATION, AND THE HIDDEN DECISIONS THAT SHAPE SOCIAL MEDIA 54–67 (Yale Univ. Press 2018).
  17. Niva Elkin-Koren, Contestability, Gatekeeping, and User Lock-In in Digital Platforms, 41 J.L. & TECH. 221, 225–260 (2023).
  18. Daphne Keller, Amplification and Its Discontents: Why Moderation Transparency Matters, 13 HOOVER WORKING GRP. PAPER 2, 4–18 (2021).
  19. SHOSHANA ZUBOFF, THE AGE OF SURVEILLANCE CAPITALISM 201–219 (PublicAffairs 2019).
  20. K.S. Puttaswamy v. Union of India, (2017) 10 S.C.C. 1 (India).
  21. Sabu Mathew George v. Union of India, (2018) 3 S.C.C. 229 (India).
  22. O.E.C.D., Competition in Digital Markets and Its Impact on Democratic Discourse, 2024 Rep. 3, 7–32.
  23. European Commission, Gatekeeper Impact Assessment Under the DMA, at 12–37 (2023).
  24. U.K. Online Safety Act, 2023, c. 50, §§ 10–15 (U.K.).
  25. Paul Bernal, Content Moderation and Constitutional Blind Spots, 5 MOD. L. REV. 84, 90–126 (2020).
  26. OECD, Algorithmic Moderation and Platform Discretion: Policy Challenges, 2024 Rep. 5, 2–19.
