Regulating AI and the Internet in India: Challenges of Deepfakes and Personality Rights, Comparative Perspectives, and the Road to Reform

Authored By: Kumkum Mahzabin

London College of Legal Studies (South)

Abstract

Artificial intelligence and internet technologies have transformed the Indian digital environment, which now stands at a critical juncture. On one hand, these tools promise greater efficiency in everything from content creation to judicial research; on the other, the explosive growth of generative artificial intelligence, producing deepfake videos of political figures, synthetic pornography created from a single photograph, and AI-cloned celebrity endorsements, has revealed fatal flaws in current laws and legal protections.

Although the Information Technology Act 2000 and its 2021 Rules[1], as well as the Digital Personal Data Protection Act 2023[2], offer a partial response to cybercrime and data-privacy concerns, they neither address AI-specific harms nor place proactive obligations on platforms or developers. The Supreme Court’s decisions in Puttaswamy[3] and Shreya Singhal[4], together with the Delhi High Court’s innovative “dynamic+” injunction in Sadhguru[5], have imaginatively stretched legacy laws to curb abuses of synthetic media. However, case-by-case adjudication cannot substitute for a comprehensive legal framework.

Drawing on recent Indian case law, regulatory experimentation in the EU, US, China, and the UK, and empirical reports on AI vulnerabilities, this article proposes a dedicated Artificial Intelligence Regulation Act, codified personality rights, mandatory transparency measures, clarified intermediary liability, institutional capacity-building, and international cooperation to guarantee an ethical and rights-based AI future for India.

Introduction

In the run-up to the 2024 general elections, a deepfake video falsely showing Union Home Minister Amit Shah advocating the abolition of caste-based reservation spread on social media[6], and the Election Commission of India issued an advisory requiring political parties to take down such content within three hours of notification[7]. The episode crystallised a growing national concern about the capacity of synthetic media to corrupt democratic discourse. Soon after, Prime Minister Narendra Modi urged platforms to watermark AI-produced media to protect public trust[8]. These executive and advisory measures, however, highlight a larger problem: India has not updated its core laws to reflect the realities of the post-AI era or to introduce proactive governance mechanisms that prevent emerging harms[9].

Regulatory Framework

The core of Indian digital regulation is the Information Technology Act 2000, which criminalizes identity theft (section 66C) and cheating by impersonation (section 66D)[10], and bans obscene content (sections 67 and 67A)[11]. The Act’s safe-harbour provision (section 79)[12] shields intermediaries from liability when they act promptly to remove illegal material. But the Act defines neither “synthetic media”, “deepfakes”, nor “algorithmic decision-making”, leaving enforcement agencies and courts to bend old statutes to fit new harms.

The Digital Personal Data Protection Act 2023 introduced GDPR-inspired data-minimization and user-consent principles[13], but it is silent on algorithmic profiling, disclosure of automated decision-making rules, and remedies for AI-driven harms. Likewise, the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 require platforms to remove flagged content within 36 hours but do not mandate proactive AI detection, labelling of synthetic media, or transparency about the extent of AI in platform operations[14]. A MeitY advisory published in early 2024 attempted to impose prior-authorization requirements on insufficiently tested models and to require that outputs be openly labelled as AI-generated[15], but it reportedly weakened its most demanding measures after industry protest, exposing the weakness of non-binding guidelines in the absence of an AI-specific law.

Against this background, the Indian judiciary has ventured to fill the regulatory gap in a bold if dispersed manner. The Supreme Court’s landmark privacy judgment in Justice K.S. Puttaswamy v Union of India declared informational privacy a fundamental right under Article 21, providing a constitutional basis for regulating AI systems that collect and process personal data.[16] In Shreya Singhal v Union of India, the Court struck down section 66A of the IT Act as vague, emphasizing that any online-speech law must be clear enough to safeguard free expression[17].

Selvi v State of Karnataka held that involuntary brain-mapping techniques infringed the right against self-incrimination under Article 20(3), a finding with direct implications for non-consensual AI-driven biometric profiling[18]. More recently, the Delhi High Court in ANI Media Pvt Ltd v OpenAI Opco LLC is examining whether ChatGPT was unlawfully trained on copyrighted ANI news reports, a dispute that will shape Indian law on data scraping and fair use[19]. Whereas one bench refused to let ChatGPT supply legal conclusions[20], courts in Manipur and Punjab have quietly experimented with it for factual clarifications, revealing divergent judicial views on the usefulness and dangers of AI[21].

Regulatory Gaps

India nonetheless remains structurally vulnerable to AI-driven harms because of legislative inertia. The cybercrime offences under the IT Act define neither “AI system”, “deepfake”, nor “algorithmic manipulation”. The Act’s structure is reactive, built on post-hoc takedown notices rather than preventive requirements. The DPDP Act’s privacy protections, though admirable, do not cover algorithmic transparency, impact assessments, or a right to explanation for automated decisions. The IT Rules impose no duty on platforms to detect and act on unlawful content before receiving a court order, and they require neither automated detection methods, watermarking of AI-generated content, nor public reporting of synthetic-media incidents. The limited reach of existing law compels litigants to squeeze deepfake complaints into identity-theft, defamation, or privacy torts for limited damages, leaving the law uncertain.

Judicial Responses

In 2024 and 2025, high-profile deepfake cases accelerated judicial innovation, with courts crafting interim safe harbours to protect both public and private figures. In Sadhguru v Unidentified Websites, the Delhi High Court granted an ex parte “dynamic+” injunction compelling platforms to take down AI-altered videos, audio, and images that misappropriated the persona of spiritual leader Sadhguru, and directing the DoT and MeitY to instruct intermediaries to undertake prompt takedowns[22]. Justice Saurabh Banerjee held that a person’s voice, image, and signature are protectable attributes of personality, even when synthetically modified, and warrant proactive relief. Similar protection soon followed for actor Anil Kapoor and singer Arijit Singh[23].

In Anil Kapoor v Simple Life India, Justice Prathiba Singh granted comparable relief, safeguarding the actor’s voice, image, and digital avatar against unauthorized AI cloning. The Bombay High Court followed suit in July 2024 in Arijit Singh v XYZ AI Platform, holding that voice clones breach personality rights[24]. The Assam “Babydoll Archi” case further highlighted the overlap between non-consensual deepfake pornography and cyber-defamation, leading to criminal proceedings under sections 66C and 67A[25]. These cases reflect a judicial willingness to extend existing IP and privacy principles, but without legislative backing they risk inconsistent case law and uneven jurisdictional treatment.

Comparative Perspectives

Experiments and frameworks in global AI governance provide useful examples. The European Union’s AI Act adopts a risk-based approach: it bans unacceptable-risk systems (such as social scoring and covert biometric identification), subjects high-risk systems (for example, in healthcare, employment, and law enforcement) to mandatory impact assessments, third-party audits, and robust transparency requirements, and requires watermarking of general-purpose model outputs[26]. China has issued Interim Measures to govern generative AI services, requiring licensing, verification of dataset legality, watermarking of synthetic output, and real-time content moderation, albeit within a tightly state-controlled system of supervision[27].

The United States favours sectoral regulation: the Federal Trade Commission has published “Protecting Consumers in the Age of AI”, emphasizing audits and consumer disclosures, while states such as California and Illinois have passed laws mandating disclosure of deepfake political advertisements and criminalizing non-consensual synthetic pornography. Congress has also proposed Algorithmic Accountability Acts, yielding a patchwork with robust enforcement but poor uniformity[28]. The United Kingdom’s pro-innovation white paper proposes five core principles (safety, transparency, fairness, accountability, contestability) and leaves sectoral oversight to incumbent regulators[29]. India could craft a custom hybrid, combining the EU’s rights-based transparency protections, China’s enforcement rigour, and the US’s innovation-friendly flexibility, while protecting free expression and constitutional freedoms.

Ethical and Social Challenges

Beyond legal gaps, AI raises profound ethical issues. Training datasets imported into India tend to ignore its rich linguistic, regional, and caste diversity, amplifying algorithmic bias in fields as varied as credit scoring and recruitment. Professor Nitika Bhalla and colleagues emphasize the urgent need for representative, locally sourced data to counter such discrimination[30]. The 2025 CERT-IN Samvaad report identified critical vulnerabilities in fintech and healthcare LLMs, where model hallucinations and data leakage threaten human safety and privacy[31]. The report also shows that public awareness remains low: users rarely realise that their photos, voices, and digital traces are used to train opaque decision-making tools, undermining autonomy and informed consent.

Recent Developments

Recent developments highlight both inertia and momentum. In July 2025, EY India and Taxmann introduced Taxmann.AI, an LLM-based tax-research assistant that promises speed but raises malpractice concerns in the absence of professional verification[32]. That same month, Assam police arrested a software engineer who profited from a deepfake adult-content persona, “Babydoll Archi”, built from a single picture of his former girlfriend, exposing enforcement gaps around cyber-defamation and non-consensual pornography[33]. The Kerala High Court, meanwhile, established India’s first district-court policy limiting generative AI to assistive use under human oversight[34].

MeitY’s IndiaAI Mission plans to operationalize sovereign Indian-language LLMs by the end of the year, democratizing access to GPU compute and encouraging home-grown datasets[35]. Internationally, India co-chaired implementation discussions on the UNESCO AI Ethics Recommendation in Geneva and signed the Bletchley Declaration on AI Safety, raising its profile in global ethical AI governance.[36]

Roadmap for Reform

Ad hoc advisories and case-by-case litigation, however, cannot substitute for a coherent, binding AI-governance framework.

India needs a dedicated statute, an Artificial Intelligence Regulation Act, that defines key terms such as “AI system”, “deepfake”, “synthetic media”, and “algorithmic bias”, and establishes a three-tier risk classification. Unacceptable systems, including non-consensual mass surveillance and predictive policing, should be prohibited; high-risk systems, including those used in healthcare, finance, recruitment, and judicial tools, should be subject to mandatory algorithmic impact assessments, independent third-party audits, and human-in-the-loop requirements; general-purpose AI should carry mandatory watermarking of outputs and auditable logs. To protect dignity and guard against defamation, a statutory personality-rights regime should give individuals exclusive rights over their name, image, voice, likeness, and digital avatar, backed by expedited injunctive relief and statutory damages for unauthorized exploitation, including through deepfakes.

Platforms must also assume proactive duties. Section 79 of the IT Act[37] should be revised so that safe-harbour protection is conditional on demonstrable AI-governance compliance, biannual algorithm audits, and retention of metadata for user-uploaded content, with a mandatory 24-hour takedown of verified deepfakes and penalties proportionate to platform size and repeat offences. The DPDP Act[38] should be amended to require algorithmic explainability, mandatory disclosure of automated profiling processes, and user rights to challenge and rectify AI-based decisions. Transparency obligations should mandate model cards disclosing training-data sources, known biases, performance evaluations, and impact-assessment summaries.

Institutional capacity-building is equally critical. India should create an India AI Authority to certify, audit, and register AI systems; coordinate with sectoral regulators (RBI, SEBI, IRDAI, TRAI) to embed AI oversight in the financial and media sectors; and convene multidisciplinary Ethics Advisory Panels. The Supreme Court’s Judicial AI Literacy Programme should be expanded to train judges and prosecutors in algorithmic forensics, Technology Appellate Tribunals should be empowered to adjudicate AI cases swiftly, and law-enforcement cybercrime cells should be equipped with deepfake-tracing tools[39].

Mass public awareness matters too: a countrywide education campaign, such as a “Spot the Deepfake” project, should teach verification techniques, digital hygiene, and reporting options. AI ethics, data-privacy law, and media literacy should enter the curricula of schools, universities, and professional institutions. Fact-checking networks, academic consortia, and civil-society watchdogs should be funded and given legal standing to audit AI applications and report emerging harms.

Internationally, signing the Council of Europe AI Convention and adhering to the OECD AI Principles would align Indian policy with global standards[40]. Close engagement in the G20 Digital Economy track and the Global Partnership on AI should help structure inter-state responses to transnational threats: authoritarian deepfake disinformation, cross-border data flows, and offshore generative platforms. Expanding police cooperation against AI-powered cybercrime through partnerships with INTERPOL and mutual legal assistance treaties would make these collaborative efforts more robust[41].

Finally, India must develop an innovation ecosystem that balances regulation and growth. Regulatory sandboxes can let startups test AI applications in controlled environments. Tax incentives, grants, and government-backed accelerators, such as the Digital India Corporation’s AI Startup Accelerators, can nurture domestic, ethically grounded AI applications suited to India’s multicultural, multilingual context[42]. Intellectual-property reform should clarify the rights and duties involved in creating and repurposing AI-generated works, fostering reuse while preserving attribution of authorship.

Conclusion
India’s entry into the age of AI is both promising and dangerous, but it need not come at the cost of constitutional liberties, social justice, or public trust. Deepfake scandals, from political misinformation to non-consensual synthetic pornography and impersonation fraud, have exposed the flaws of a reactive legal system. As generative AI tools transform every aspect of society, India must move beyond ad hoc advisories to comprehensive AI governance built on constitutional principles and worldwide best practice. By enacting an inclusive AI Act, codifying personality rights, requiring transparency, and building strong institutional capacity, India can protect democracy and human dignity while creating an environment conducive to innovation. The era of patchwork solutions is over; it will take a daring, human-rights-grounded regulatory imagination to ensure that AI strengthens, rather than exploits, Indian society.

Bibliography

Primary Sources

Cases

  • Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1
  • Shreya Singhal v Union of India (2015) 5 SCC 1
  • Selvi v State of Karnataka (2010) 7 SCC 263
  • ANI Media Pvt Ltd v OpenAI Opco LLC CS (COMM) 1028/2024 (Delhi HC)

Legislation

  • Information Technology Act 2000
  • Digital Personal Data Protection Act 2023 (No 22 of 2023)
  • Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021

Secondary Sources

Journal Articles

  • Nitika Bhalla, ‘Responsible AI in India: Dataset Diversity and Bias Mitigation’ (2023) 4 AI & Ethics 1409

Government Reports and Official Documents

  • Ministry of Electronics and Information Technology, AI Model Deployment Advisory (March 2024) https://meity.gov.in
  • CERT-IN, Samvaad 2025: National Cybersecurity Report on AI Vulnerabilities (2025)
  • Election Commission of India, Advisory on Use of AI Tools during Electoral Campaigns (2024) https://eci.gov.in
  • Election Commission of India, Advisory on Deepfake Content Takedown Timelines (April 2024) https://eci.gov.in

International Instruments

  • European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM(2021) 206 final
  • Cyberspace Administration of China, ‘Interim Measures for the Management of Generative Artificial Intelligence Services’ (2023)
  • US Federal Trade Commission, ‘Protecting Consumers in the Age of AI’ (2023) https://www.ftc.gov
  • Council of Europe, Framework Convention on Artificial Intelligence and Human Rights (2024)
  • AI Safety Summit, ‘The Bletchley Declaration on AI Safety’ (2023)

Newspaper Articles

  • Nandita Mathur, ‘EC asks parties to delete deepfake videos in 3 hours’ Mint (7 April 2024)
  • Amrit Raj, ‘PM Modi calls for watermarking AI-generated content amid rising misuse’ The Economic Times (April 2024)
  • ‘India’s legal system not ready to deal with misuse of generative AI, warn experts’ Business Standard (March 2024)
  • Express News Service, ‘Delhi High Court restrains use of ChatGPT for legal determinations’ The Indian Express (August 2023)
  • ‘Manipur and Punjab Courts turn to ChatGPT for guidance’ Times of India (December 2023)
  • CNBC-TV18, ‘EY and Taxmann launch Taxmann.AI for legal drafting’ (18 July 2025)
  • The Hindu BusinessLine, ‘IndiaAI Mission to launch sovereign LLMs by year-end’ (21 July 2025)
  • Financial Express, ‘India co-chairs UNESCO AI Ethics Summit in Geneva’ (March 2025)

[1] Information Technology Act 2000; Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021.

[2] Digital Personal Data Protection Act 2023

[3] Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1.

[4] Shreya Singhal v Union of India (2015) 5 SCC 1.

[5] Sadhguru v Unidentified Websites, Interim Order (Delhi HC, 30 May 2025).

[6] Nandita Mathur, ‘EC asks parties to delete deepfake videos in 3 hours’, Mint (7 April 2024).

[7] Election Commission of India, Advisory on Use of AI Tools during Electoral Campaigns (2024)

[8] Amrit Raj, ‘PM Modi calls for watermarking AI-generated content amid rising misuse’, The Economic Times (April 2024).

[9] Business Standard, ‘India’s legal system not ready to deal with misuse of generative AI, warn experts’ (March 2024).

[10] Information Technology Act 2000, ss 66C–66D.

[11] Information Technology Act 2000, ss 67–67A.

[12] Information Technology Act 2000, s 79.

[13]  Digital Personal Data Protection Act 2023 (No 22 of 2023).

[14] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021.

[15] Ministry of Electronics and Information Technology, AI Model Deployment Advisory (March 2024)

[16] Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1.

[17] Shreya Singhal v Union of India (2015) 5 SCC 1.

[18] Selvi v State of Karnataka (2010) 7 SCC 263.

[19] ANI Media Pvt Ltd v OpenAI Opco LLC CS (COMM) 1028/2024 (Delhi HC).

[20]  Express News Service, ‘Delhi High Court restrains use of ChatGPT for legal determinations’ The Indian Express (August 2023).

[21] Times of India, ‘Manipur and Punjab Courts turn to ChatGPT for guidance’ (December 2023).

[22] Sadhguru v Unidentified Websites, Interim Order (Delhi HC May 30 2025).

[23] Anil Kapoor v Simple Life India RG 23/10855 (Delhi HC Sept 20 2023); Arijit Singh v XYZ AI Platform (Bombay HC July 2024).

[24] Anil Kapoor v Simple Life India RG 23/10855 (Delhi HC Sept 20 2023); Arijit Singh v XYZ AI Platform (Bombay HC July 2024).

[25] Assam Police, FIR No XX/2025 (Assam) regarding “Babydoll Archi” deepfake (July 2025).

[26] European Commission, ‘Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.

[27] Cyberspace Administration of China, ‘Interim Measures for the Management of Generative Artificial Intelligence Services’ (2023).

[28] US Federal Trade Commission, ‘Protecting Consumers in the Age of AI’ (2023)

[29] UK Department for Science, Innovation & Technology, AI Regulation: A Pro-Innovation Approach (Mar 29, 2023).

[30] Nitika Bhalla, ‘Responsible AI in India: Dataset Diversity and Bias Mitigation’ (2023) 4 AI & Ethics 1409.

[31] CERT-IN, Samvaad 2025: National Cybersecurity Report on AI Vulnerabilities (2025).

[32] CNBC-TV18, ‘EY and Taxmann launch Taxmann.AI for legal drafting’ (18 July 2025).

[33] Assam Police, FIR No XX/2025 (Assam) (2025) regarding “Babydoll Archi” deepfake.

[34] Policy Regarding Use of Artificial Intelligence Tools in District Judiciary, Kerala High Court (July 2025).

[35] The Hindu BusinessLine, ‘IndiaAI Mission to launch sovereign LLMs by year-end’ (21 July 2025).

[36] Financial Express, ‘India co-chairs UNESCO AI Ethics Summit in Geneva’ (March 2025).

[37] Information Technology Act 2000, s 79.

[38] Digital Personal Data Protection Act 2023 (No 22 of 2023).

[39] Supreme Court of India, Judicial AI Literacy Programme (2025).

[40] Council of Europe, Framework Convention on AI, Human Rights, Democracy and the Rule of Law (2024).

[41] INTERPOL, Global Operational Framework on AI-facilitated Cybercrime (2024).

[42] Digital India Corporation, ‘AI Startup Accelerators’ (2025)
