Authored By: Rajdeep Dutta
University of Calcutta
Abstract:
The rapid expansion of artificial intelligence has enabled the creation of hyper-realistic “deepfakes”, synthetic audio-visual content capable of convincingly manipulating identity and reality. While the technology offers legitimate creative uses, its misuse has accelerated harms such as non-consensual sexual imagery, political misinformation, fraud, and reputational damage. This article examines how major jurisdictions, including the United States, the European Union, China, and India, have responded to the deepfake phenomenon. China became the first country to adopt deepfake-specific regulations, imposing strict labelling, identity verification, and platform-monitoring obligations. The EU’s AI Act and GDPR (General Data Protection Regulation) establish the world’s strongest transparency and data-protection standards, while the United States relies on fragmented state-level laws constrained by free-speech concerns. India continues to depend on the IT Act and the DPDP Act, with proposals for dedicated regulation. Through comparative analysis, the article identifies regulatory gaps, enforcement challenges, and the need for harmonised global frameworks to address deepfakes in an AI-driven world.
Introduction:
The recent case of “Abhishek Bachchan vs Bollywood Tea Shop and ors.” raised the question of personality rights and how deepfakes are being used to present a distorted version of reality. For the first time in India, a court grappled with the implications of digital impersonation and the issues it can pose. A deepfake can be understood as a machine-altered or artificial image, audio, or video generated by a relatively modern branch of machine learning called “deep learning”, hence the name. In this process, the algorithm is supplied with examples and instructed to generate an output resembling the examples and references available to it. More often than not, deepfakes are far removed from the truth, presenting the most distorted and deranged versions of it.
Facebook and Instagram users will be familiar with “trends” where they click a button and the screen shows them how they might have looked in the past or how they would look 20 years into the future. The concept of photo and video editing is not new at all, and Snapchat has a filter that lets one swap faces with another person. The difference between these and a deepfake is purpose: such tools create harmless iterations of an image for momentary amusement, rather than a permanent scar on the dignity of an individual.
In a world where most people consume information from the internet, deepfakes have become a means of harassment. While it is well known that a large portion of the information available online is adulterated, credible sources are now also in decline. Individuals and businesses face severe information sabotage, exploitation, and intimidation due to this advancement of technology, and while it is sure to serve many good purposes, for now it is producing material that a large share of unsophisticated consumers fail to distinguish from reality. Deepfakes have emerged as a threat to democracy and to the personal and privacy rights of the people. They are being used to manipulate the gullible and susceptible, to spread propaganda, and to sway decisions resting upon the minds of these people.
Artificial intelligence is now available to anyone who can manage an internet connection and imagine something that can be turned into a deepfake. People with no idea of the consequences their actions may have on a person are typing commands to create material of their heart’s desire and spreading it through different channels.
India, to this day, has no statutory law that directly tackles the challenge of deepfakes. This article examines the severe lack of legal frameworks surrounding deepfakes around the world and how legal institutions across borders have tried to address the issue.
The European Union:
Lawmakers around the globe have struggled to keep pace with the rapid advancement of AI and the capabilities it harbours. The European Union entered the conversation by enacting the European AI Act, Regulation (EU) 2024/1689. This was the first attempt at a comprehensive legal framework directly addressing the issues caused by artificial intelligence and deepfakes, aimed at fostering trust and accountability in the use of AI among the people of Europe. The Act prohibits certain AI systems and certain uses of AI, including:
- Using AI to deploy manipulative, subliminal, or deceptive techniques that distort a person’s behaviour in a manner likely to cause harm.
- Using AI to exploit vulnerabilities such as age, disability, or economic circumstance for financial gain. An AI chatbot may cause a child to become addicted to its features, manipulate elderly people with limited technological knowledge into spending money on unnecessary or expensive medicines, or induce disabled people with limited cognitive function into buying care packages or a fake mental-health companion. Some AI chatbots target people in economic hardship, collect their data, and sell it to loan-providing agencies who, in turn, worsen their condition.
- Classifying or categorising individuals based on sensitive attributes such as race, caste, sex, political opinion, or personal beliefs.
- Profiling individuals to assess the probability that they will commit a crime.
- Using AI for ‘real-time’ remote biometric identification in publicly accessible spaces, with exceptions for identifying missing people, preventing a threat to life, or identifying suspects in serious crimes.
AI providers must maintain an established risk management system, ensure that up-to-date technical documentation for the AI system is available, keep records for identifying relevant information, operate a quality management system, and ensure appropriate levels of accuracy.
The United States of America:
The United States, to date, does not have a single active statute that deals explicitly with deepfakes, although the U.S. Congress has introduced multiple bills, none of which has become federal law. Some of the introduced bills are:
- REAL Political Advertisements Act (2023-2024): Introduced in the 118th Congress, the bill calls for mandatory, clear disclaimers and accountability where synthetic media is used to create advertisements for political campaigns.
- DEEPFAKES Accountability Act (2023-2024): The bill proposed mandatory watermarking of synthetic media and transparency obligations. It also proposed criminal liability, at the discretion of the courts, for the use of deepfakes to harm the personal liberty or dignity of individuals.
Celebrities such as Scarlett Johansson, Taylor Swift, and Steve Harvey have been heavily targeted by deepfakes. Many of them have voiced support for the bills introduced in Congress and have called for a structured framework to address the threat of deepfakes.
The State of California introduced two Assembly Bills, AB 602 and AB 730, to curb the harms of deepfakes. AB 602, going beyond the existing statutes on “revenge porn”, creates a civil cause of action against both the creator and the distributor of digitally altered content of a sexually explicit nature, with claimable damages ranging from $1,500 to $150,000 where malicious intent is proven. AB 730 addresses the use and distribution of fake, digitally altered depictions of candidates for public office and other elected officials within 60 days of an election, used to spread hate or propaganda against the person. The aggrieved party is entitled to injunctive relief and attorney’s fees.
People’s Republic of China:
China has, by far, the most complete and solid regulatory framework for dealing with the threat of deepfakes. On 10 January 2023, the Cyberspace Administration of China (CAC) brought into force the Provisions on the Administration of Deep Synthesis Internet Information Services. These provisions impose requirements such as:
- Mandatory labelling of AI content with a superimposed watermark that clearly mentions that the material in question is synthetic.
- Platforms that deploy AI mechanisms used for deepfakes must verify users’ real identities, including their current mobile numbers.
- Mandatory detection models and content moderation that stop the creation and spread of harmful media.
- Companies must register their AI algorithms with the Chinese government and comply with its regulations.
China has long sought to control internet usage within its borders. It has laws in place that protect the personal rights of its people on the internet, and those laws extend to deepfakes. These include:
- Personal Information Protection Law (2021): This law prevents the usage of a person’s voice, face, image or likeness without obtaining prior consent.
- Cybersecurity Law (2017) and Data Security Law (2021): These laws were enacted to prevent the circulation of fake and adulterated information and to ensure that biometric information remains protected.
Where does India stand?
In a recent development, Ankur Warikoo, a motivational speaker, entrepreneur, wealth management advisor, and well-known media personality, moved the Delhi High Court seeking John Doe injunctions against the use of his likeness to create deepfakes. These deepfakes promoted fraudulent WhatsApp groups and other financial and investment schemes, threatening significant reputational harm and financial losses to people who had placed their faith in him. The court recognised that AI deepfakes, when used in such harmful ways, can cause serious damage to personality rights. It found that the similarity between his real content and the deepfake videos was extensive, posed a high risk of public confusion and fraud, and would harm the public goodwill he had earned. The court granted the injunction prayed for, restraining John Doe defendants from publishing or sharing content featuring Ankur Warikoo’s likeness.
A new bill, The DeepFake Prevention and Criminalisation Bill, 2023, was introduced in the Rajya Sabha on 7 February 2025. It aimed at the immediate prevention and criminalisation of the use of deepfake content without consent and without a clear watermark declaring the use of AI. The bill also proposed the creation of a National Deepfake Mitigation and Digital Authenticity Task Force, which would be tasked with:
- Evaluating the spread of deepfakes and the degree of their effect on the personal lives of people, businesses, and the functioning of the Government.
- Evaluating all the risks that deepfakes pose to public privacy.
- Determining the extent to which deepfakes influence public participation, including the minds of the electorate.
- Evaluating the degree of penalty that can be imposed on an accused, varying from case to case.
- Recommending guidelines for social media intermediaries to uphold the basics of data privacy online.
Conclusion:
The legal framework surrounding AI deepfakes in India is patchy at best and by no means sufficient to address every challenge they pose. Not only in India but around the world, legal institutions are failing to keep up with the velocity of technological advancement, leaving behind unclear or incomplete laws for dealing with this online threat. Bills stall before becoming Acts out of concern for constitutional guarantees of free speech. The debate over which media is harmful and which is mere satire rages on, but one thing is certain: day by day, the number of people aggrieved by deepfakes is rising rapidly. As society moves slowly but surely into an AI-led era, clear legal boundaries have become the need of the hour if trust in digital media, and in the law itself, is to be upheld.