
Deepfakes and the Law: Addressing Legal Accountability for AI-Generated Misinformation

Authored By: Kadambari Manojkumar Sonawane

Abstract  

“Who is legally responsible for the damage caused by AI-generated misinformation – the developer, the user, or someone else – when the defects are unpredictable?”

Deepfake technology is becoming increasingly sophisticated, creating synthetic audio, video, and images that are nearly indistinguishable from the real thing. Because it is ever harder to tell what is real from what is fake, advanced AI raises serious ethical and legal issues, and the most immediate danger is to our legal system. The legal landscape surrounding deepfakes is complex and evolving rapidly: courts are grappling with questions of copyright infringement, defamation, and the implications of deepfakes for personal privacy and dignity. In court, evidence must be credible and admissible, yet deepfakes cast doubt on the authenticity of any digital proof, which could seriously undermine the fairness of trials and investigations. Beyond the courtroom, deepfakes pose a direct threat to personal reputation and safety. Individuals are becoming targets of manipulated content designed to defame, blackmail, or mislead the public, and the damage caused by these highly realistic fakes can be swift, severe, and often impossible to reverse fully. We now face a complex challenge in which technological advancement clashes directly with the fundamental need for truth and trust. This necessitates a careful balance between embracing technological innovation and implementing effective measures to counter the potential misuse of deepfakes.

  1. Introduction 

AI’s Dark Side: Why Deepfakes Are a National Crisis

Imagine an AI that can perfectly replicate your voice or face and use it to damage your reputation or steal from your bank account. This isn’t science fiction; it’s the reality of deepfakes in India. The incredible power of AI to create hyper-realistic fake media, known as deepfakes, is now a major challenge in India. This isn’t just about playful digital manipulation; it’s about criminal use. Malicious actors are using this technology to spread toxic lies, execute complex financial scams, invade personal lives, and potentially sabotage the democratic process. The constant stream of deepfakes targeting celebrities and politicians proves one thing: India’s legal and regulatory system is at a breaking point. The technology behind deepfakes was first designed for creative work, like making movies and dubbing languages. Unfortunately, it is now frequently misused for misinformation, defamation, and serious cybercrime. As digital evidence becomes increasingly central in both civil and criminal litigation, the authenticity of such material is critical.

The journey to deepfakes started in the 1990s, when researchers first used computer-generated imagery (CGI) to make realistic human images. The real breakthrough came in 2014 with the invention of Generative Adversarial Networks (GANs), the core AI technology used today.

The term “deepfake” was first used in 2017 on Reddit to describe face-swapped videos. It  quickly went mainstream in 2018, and its use exploded—the number of videos online nearly  doubled in 2019. Since 2021, new AI tools like DALL-E have expanded the threat beyond just videos to creating any image from text. As a result, deepfake incidents have skyrocketed,  increasing by 245% between 2023 and 2024. 

The Indian legal system has laws like the Information Technology Act, 2000, and parts of the Indian Penal Code, 1860. However, it has no specific laws designed to handle the complex problems caused by synthetic media such as deepfakes. Numerous individuals have been targeted through non-consensual deepfake pornography, political impersonation, and falsified interviews or statements circulated on social media, leading to irreparable reputational harm and emotional trauma.

IT Minister Ashwini Vaishnaw said that the government is responding to many calls to fight  synthetic content and deepfakes that spread false information. He explained that in Parliament  and other places, people have demanded action because deepfakes are hurting society. These  fake images often use the likeness of prominent people, causing harm to their personal lives  and privacy, and spreading misconceptions among the public. 

Amit Shah Deepfake Case (2024, Delhi): Two individuals were arrested over a manipulated video of the Home Minister falsely claiming changes to reservation policies. Charged under Sections 66C and 66D of the IT Act and Sections 153A (promoting enmity) and 468 (forgery for the purpose of cheating) of the IPC, the case prompted the Election Commission of India (ECI) to issue stricter guidelines on AI misuse.

The Delhi High Court on Friday (September 19, 2025) restrained several entities from  misusing filmmaker Karan Johar’s name, image, voice, and persona through technological  tools such as Artificial Intelligence (AI), Machine Learning, Deepfakes, face morphing and  GIFs for commercial purposes. 

The case comes after similar orders were passed recently by the court protecting the personality and publicity rights of actors Aishwarya Rai Bachchan and Abhishek Bachchan. The court acknowledged that the misuse of deepfake technology can cause serious harm to an individual’s reputation and constitutes a violation of the privacy and human dignity guaranteed under Article 21 of the Constitution. The court ordered Google to remove AI-generated videos, holding that the platform was responsible for preventing the recurrence of such content.

  2. Understanding Deepfakes

2.1 Definition 

The name comes from combining “deep learning” and “fake”. A deepfake is created using powerful Artificial Intelligence (AI) technology, specifically a technique called deep learning. The AI analyses large amounts of real footage or audio of a person and then uses that data to create a new, fabricated piece of media in which the person appears to say or do something they never actually said or did.

Consider, for example, the headline: “Beyond the Filter: AI’s Dark Side Exposed as Girls Reveal Photo Privacy Risks on Gemini AI” (the AI saree trend).

Such images raise concerns about data privacy, security, and the potential misuse of personal images.

“Even Google has emphasised the potential for data privacy violations and photo misuse with AI saree trends, cautioning users to remember their responsibility.”

Main Risks of Feeding Photos to AI Generators 

  1. Deepfake creation: Every photo you upload can be used by AI to generate convincing fakes, fuelling identity theft, harassment, and instant reputational ruin. Trends like Gemini and Nano Banana simply provide the perfect training data.
  2. Your Likeness Is Ripe for Abuse: That photo upload isn’t just a picture; it’s a data goldmine. AI systems mine your image for personal details, then use your face to generate unconsented deepfakes, inserting you into explicit or false scenarios. You lose all agency over your digital self.
  3. Commercial Use Without Consent: AI platforms may use your uploaded photos for commercial purposes without your knowledge or consent. This could include generating ads or training datasets for other AI models.

2.2 Technology Behind Deepfakes  

The creation of deepfakes is becoming easier, more accurate, and more prevalent as the following technologies are developed and enhanced:

  • Generative Adversarial Networks (GANs): use generator and discriminator algorithms to develop all deepfake content.
  • Convolutional neural networks (CNNs): analyse patterns in visual data and are used for facial recognition and movement tracking.
  • Autoencoders: a neural network technology that identifies the relevant attributes of a target, such as facial expressions and body movements, and then imposes those attributes onto the source video.
  • Natural language processing (NLP): used to create deepfake audio. NLP algorithms analyse the attributes of a target’s speech and then generate original text using those attributes.
  • High-performance computing: provides the significant computing power deepfakes require.
  • Video editing software: not always AI-based, but it frequently integrates AI technologies to refine outputs and make adjustments that improve realism.

Technologies Involved in Deepfakes 

Generative Adversarial Networks (GANs): Utilise two competing machine learning networks, namely the Generator and the Adversary (discriminator). The Generator is a trained model that creates new examples of data exhibiting the characteristics of the original input data. The Adversary detects flaws or fakes, flagging those examples that do not exhibit some characteristic of the original data. The detected fakes are fed back to the Generator, improving the process of creating new data with each round until the output is hard to distinguish from the real thing.
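To make this feedback loop concrete, the following is a minimal, illustrative sketch of a single GAN training step in Python (PyTorch). The layer sizes, learning rates, and the flat 28×28 image shape are arbitrary assumptions for demonstration; real deepfake systems use far larger convolutional architectures.

```python
# A minimal, illustrative GAN training step in PyTorch (not a real deepfake pipeline).
import torch
import torch.nn as nn

LATENT_DIM = 64  # size of the random noise vector fed to the generator (assumption)

# Generator: turns random noise into a synthetic sample (a flat 28x28 "image").
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Adversary/discriminator: outputs a logit scoring how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    fakes = generator(torch.randn(n, LATENT_DIM))

    # 1) Train the adversary to separate real samples (label 1) from fakes (label 0).
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(n, 1))
              + loss_fn(discriminator(fakes.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Feed the adversary's verdict back to the generator: it is rewarded
    #    when its fakes are classified as real, so each round improves them.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fakes), torch.ones(n, 1))
    g_loss.backward()
    g_opt.step()
```

In practice, deepfake face-swap systems combine this adversarial training with the convolutional networks and autoencoders described above.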

  3. Legal Framework

The core legal effort focuses on making existing digital laws applicable to the new challenges  posed by AI-generated misinformation. 

3.1 Information Technology Act, 2000 (IT Act) 

This is the parent law. It provides the basis to prosecute cybercrimes, including:

  • Identity theft (Section 66C)  
  • Cheating by personation (Section 66D) 
  • Violation of privacy (Section 66E) 
  • Publishing obscene content (Section 67/67A) 

3.2 IT Rules, 2021 (Intermediary Guidelines)

This puts accountability on social media platforms (like Facebook, YouTube, X). The rules  require platforms to exercise due diligence and quickly remove content that misleads or  deceives (including through deepfakes). 

3.3 Grievance Appellate Committees (GACs) 

If a user or victim feels a platform hasn’t acted correctly (e.g., they didn’t take down a fake  video fast enough), they can appeal to these special committees online. This gives victims a  powerful, quick legal avenue for justice. 

3.4 Recent Amendments to IT Rules, 2021 

The government has proposed and notified amendments that specifically define “synthetically generated information” and mandate that platforms label AI-generated content, directly addressing the gaps left by the older, technology-neutral provisions.
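The Rules do not prescribe a specific technical labelling scheme, so the following is only a hypothetical sketch of what a machine-readable disclosure could look like: embedding a tag in an image’s PNG metadata using Python’s Pillow library. The key names (“SyntheticMedia”, “Generator”) and file names are illustrative assumptions, not any mandated format.

```python
# Hypothetical sketch: embedding a synthetic-media disclosure tag in PNG metadata
# with Pillow. The key names and file names are illustrative, not a mandated schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated_portrait.png")  # assumed to be an AI-generated image

info = PngInfo()
info.add_text("SyntheticMedia", "true")          # machine-readable disclosure flag
info.add_text("Generator", "example-model-v1")   # illustrative provenance field

img.save("generated_portrait_labelled.png", pnginfo=info)

# A platform or verifier could later read the label back:
labelled = Image.open("generated_portrait_labelled.png")
print(labelled.text.get("SyntheticMedia"))  # -> "true"
```

Plain metadata is easily stripped on re-upload, which is why visible watermarks and cryptographic provenance standards such as C2PA content credentials are often discussed as more robust complements.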

3.5 Bharatiya Nyaya Sanhita (BNS), 2023

  • Section 353: Penalises the creation and spreading of false or misleading statements that  can cause public mischief or fear.  
  • Section 111: Addresses organised cybercrimes, which can include those involving  deepfakes.  
  3. Section 319: Deals with cheating by personation, which can be used for deepfake-related fraud.
  • Section 336: Criminalises electronic forgery.  
  • Section 356: Extends defamation laws to synthetic media. 

3.6 Digital Personal Data Protection (DPDP) Act, 2023

This act requires consent for processing personal data. Deepfakes created using personal data  without consent can lead to penalties of up to Rs. 250 crore. 

3.7 Indian Penal Code (IPC), 1860

Sections 499 and 500: Can be invoked in cases where deepfakes lead to defamation.

The BNS includes new sections specifically aimed at holding people legally accountable for  digital deception: 

Section 169 (Replaces IPC Section 469): This section punishes you if you create a fake  document or digital file (like a deepfake) with the specific goal of harming someone’s  reputation. 

Section 354 (Replaces IPC Section 500): This section deals with defamation (damaging  someone’s good name) when it is caused by AI-generated content. This ensures the creators of  harmful deepfakes or texts are held responsible. 

Section 357 (Replaces IPC Section 505): This section makes it illegal to spread false or fake  news that could lead to public chaos or disturbances. It targets the intentional spread of  misinformation that causes real-world problems. 

3.8 Proposed Digital India Act (DIA)

The DIA, still in the consultation phase as of July 2025, aims to replace the IT Act and address emerging technologies like AI. It is expected to include provisions criminalising malicious deepfakes, imposing fines on creators and platforms, and mandating transparency for AI-generated content. Public consultations have emphasised technology-neutral rules to ensure future relevance.

  4. Judicial Interpretation

Legal frameworks should precede technological advancements to prevent misuse. India’s  Digital Personal Data Protection Act, 2023, introduced in response to privacy concerns,  exemplifies this proactive approach. Regulations surrounding AI should be legislated in  advance to define liabilities for AI-generated harm, mandate ethical AI development, and  ensure compliance with constitutional rights. The absence of AI-specific legislation results in  gaps in accountability, necessitating judicial interpretation to address regulatory deficiencies. 

Indian courts have creatively applied existing laws to address deepfakes. In Anil Kapoor’s case (2023), the Delhi High Court granted an ex parte injunction restraining the defendants from using deepfake videos to exploit Kapoor’s likeness for commercial purposes, recognising personality rights as a protectable interest against deepfake misuse and setting a precedent for civil remedies. Courts have also granted injunctions and damages under tort law for defamation and privacy violations. The court reasoned that an individual has an inherent and enforceable right to control, protect, and commercially exploit their own personality; unauthorised use of a persona through deepfakes was deemed a violation of both personality and privacy rights under the Constitution.

Rashmika Mandanna Deepfake Case (2023, Delhi): Four individuals were arrested under  Sections 66D and 66E of the IT Act for creating and disseminating a non-consensual deepfake  video of the actress. The case highlighted the applicability of cybercrime laws but exposed  delays in tracing perpetrators across platforms.

Vineeta Singh v. Unknown (2024, Delhi): The ‘Shark Tank India’ judge publicly condemned fake “death news” about herself, sharing screenshots of a post that read “Hard day for India…”.

She subsequently obtained an injunction against a deepfake video falsely showing her endorsing a health product, invoking privacy and defamation laws. The case highlighted the growing misuse of deepfakes in commercial fraud.

  5. Election Laws

The Representation of the People Act, 1951: This law is designed to keep elections fair. It punishes people for making false statements about a candidate to damage their reputation and affect voting. However, it does not specifically account for AI-generated misinformation; it focuses on human-made statements. A court has to stretch the meaning of “false statement” to cover a hyper-realistic, AI-generated video showing a candidate doing something they never did.

  6. The Hidden Threat to Justice: How Deepfakes Complicate Investigations and Court Cases

The flaws in India’s system have a direct and dangerous impact on the way police investigate  crimes and courts deliver justice. Here’s why: 

  • Making Digital Proof Unreliable: In today’s world, many important clues are digital videos, photos, or audio recordings. But with deepfakes, it is incredibly difficult for investigators and courts to be sure whether a piece of digital evidence is real or has been tampered with (one partial safeguard, cryptographic hashing, is sketched after this list).
  • The Danger of Manipulated Audio-Visual Evidence: Imagine a courtroom where a  video shows a person confessing to a crime, but it’s fake. Or a recording seems to show  someone saying something that would completely change their case. This kind of  manipulation makes it nearly impossible to know what was really said or done. 
  • Deepfakes – A Direct Threat to Legal Evidence: When deepfakes can be so  convincing, the entire system of relying on digital evidence is put at risk. Courts might  start questioning every piece of digital evidence, slowing down trials and potentially  leading to wrong decisions in some cases. This is a significant challenge for the justice  system’s ability to adapt to new technologies. 
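A standard forensic safeguard that partially answers this problem is to hash evidence at the moment of collection and record the digest in the chain-of-custody log; any later alteration then becomes detectable. Below is a minimal Python sketch under that assumption; the file path is hypothetical, and note the limitation stated in the comments: a matching hash proves the file has not changed since hashing, not that the recording was genuine when captured.

```python
# Minimal sketch: anchoring a piece of digital evidence with a SHA-256 digest.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large videos."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# At seizure: record the digest in the chain-of-custody log (path is hypothetical).
evidence = Path("evidence/confession_video.mp4")
recorded_digest = sha256_of(evidence)

# At trial: recompute and compare. A mismatch proves tampering after seizure;
# a match does NOT prove the recording was authentic when it was captured.
if sha256_of(evidence) != recorded_digest:
    print("Integrity check failed: evidence altered after collection.")
```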
  7. International Legal Framework for Deepfakes and AI Misinformation

China

  • Regulatory approach: Mandatory disclosure and state control; regulation targets the creation and distribution of “deep synthesis” technology.
  • Key legislative instruments: The Deep Synthesis Provisions (2023) mandate real-name verification for users and watermarking/labelling of all AI-generated content, and require creators to obtain consent before using others’ biometric data.
  • Accountability mechanism: Direct government oversight; companies must register deep synthesis services with the government and report illegal content.

European Union (EU)

  • Regulatory approach: Comprehensive, risk-based regulation (proactive); focuses on transparency and harm prevention across all AI systems.
  • Key legislative instruments: The EU AI Act mandates transparency obligations for deepfakes; AI-generated images, audio, or video must be clearly disclosed and labelled so users know the content is synthetic.
  • Accountability mechanism: The Digital Services Act (DSA) holds Very Large Online Platforms (VLOPs) directly accountable for failing to quickly detect and remove illegal deepfakes and misinformation.

United States (US)

  • Regulatory approach: Targeted state laws and existing laws (reactive); there is no single federal AI law, so the US relies on existing criminal/tort law and specific state legislation.
  • Key legislative instruments: State laws (e.g., California, Texas) focus on specific harms, such as banning non-consensual sexually explicit deepfakes and prohibiting deceptive deepfakes during elections (with specific timeframes and disclosure requirements).
  • Accountability mechanism: Judicial/civil action; victims rely primarily on civil lawsuits (tort law, defamation) and criminal prosecution under existing laws, or on specific state-level deepfake statutes.

United Kingdom (UK)

  • Regulatory approach: Platform accountability (harm-focused); places responsibility on social media companies to protect users from illegal and harmful content.
  • Key legislative instruments: The Online Safety Act (OSA) imposes a “duty of care” on social media platforms to tackle illegal content, including deepfakes used for harassment, threats, and non-consensual images; the law targets platform system-level failures.
  • Accountability mechanism: OFCOM, the regulator, is empowered to enforce the OSA and impose severe fines on platforms that fail to take reasonable measures to detect and remove harmful deepfakes; sharing deepfake pornography can make a person liable to imprisonment for up to two years.

Conclusion 

The accelerating sophistication of deepfake technology poses a complex, cross-cutting threat to legal, ethical, and societal stability. Synthetic media erodes the fundamental credibility of digital evidence and poses an unprecedented challenge to judicial integrity by fabricating seemingly authentic legal materials, dangerously blurring the distinction between truth and falsehood. While India possesses legislative frameworks such as the Information Technology Act, 2000, the newly enacted Bharatiya Nyaya Sanhita, 2023, and the Bharatiya Sakshya Adhiniyam, 2023, these laws are often outdated in scope or fail to address the unique, AI-driven nature of deepfakes, which are frequently exploited for severe harms such as political manipulation and character assassination. Enforcement is further compounded by the anonymity of perpetrators, the rapid cross-border dissemination of content, and a pervasive lack of technological preparedness among enforcement agencies, all of which obstruct effective action and adequate victim protection.

The doctrinal research presented here analysed case law and statutory provisions, along with reporting in The Indian Express, The Hindu, and other sources, to understand the existing legal framework relevant to the examined issue. This comprehensive approach allowed for the identification of both the strengths and the limitations of the current legal landscape. The methodology involved careful selection and detailed analysis of these legal sources to ground the conclusions in legal principles.

References

  1. India Deepfake Rules News: As India looks to mandate AI content labelling, examining the growing menace of deepfakes
  2. ‘Shark Tank India’ judge Vineeta Singh slams fake ‘death news’; shares screenshots of post saying “hard day for India …” – Times of India
  3. Regulating deepfakes and generative AI in India | Explained – The Hindu
  4. What Is a Deepfake? Definition & Technology | Proofpoint US
  5. Spooked by AI, Bollywood stars drag Google into fight for ‘personality rights’ | Reuters
  6. ‘Not Only Me’: Actor Anil Kapoor Wins AI Deepfake Court Case – Decrypt
  7. Rashmika Mandanna deepfake case: Main accused arrested in Andhra Pradesh – India Today
  8. Deepfakes in India: A Legal Analysis of Emerging Challenges and Regulatory Framework – LawArticle
  9. Gemini AI Saree Trend Rules Instagram; But are Your Photos Really Safe?
  10. Weaponizing reality: The evolution of deepfake technology | IBM
