
Bridging the Gaps: Toward a Unified International Framework for Deepfake Regulation under IP and Human Rights Law

Authored By: Akhilesh Kakade

High Court of Bombay

ABSTRACT

This paper critically analyses the patchwork regulation of deepfake technology across the globe, focusing on the intersection of intellectual property (IP), privacy, data protection, and international human rights law. Deepfakes, synthetically generated or manipulated audio-visual content, are dangerous because they enable identity theft, misinformation, and non-consensual abuse while eroding the line between the real and the fake in digital space. Existing national provisions on copyright, moral rights, privacy, and platform accountability were developed for pre-AI conditions and fail to address harms rooted in synthetic impersonation and viral dissemination. Using a comparative doctrinal study of the European Union, the United States, China, and India, the paper assesses how divergent regulatory forms, for all their emphasis on transparency and provenance, rarely deliver portable assurances or cross-border enforcement. The study identifies sectoral gaps: creative works enjoy the protection of copyright and moral rights, but synthetic identity does not; data flows enjoy the protection of privacy laws, but re-created identities do not; and human rights norms protect dignity, privacy, and reputation but are technology-neutral and imprecise about AI-enabled manipulation. Gendered harms are taken into account, in particular that women are the most common targets of deepfakes and that existing regimes offer inadequate redress. The paper argues for an integrated strategy: a coherent international legal framework combining IP, data protection, and human rights principles, grounded in informed consent and transparency, cross-border takedown processes, and due diligence by AI supply-chain actors. It proposes a WIPO-UNESCO Protocol, with a TRIPS annex, to recognize synthetic impersonation as a distinct legal wrong and to create interoperable remedies, labelling norms, and rapid global response mechanisms. Safe harbours preserve artistic, scholarly, and transformative uses, conditioned on disclosure and consent. In conclusion, the regulatory gaps can be closed only through a layered, rights-based, and internationally portable framework that guarantees platform accountability and the autonomy and dignity of individuals in a fast-changing digital environment.

Keywords: Deepfakes, International Law, Intellectual Property, Privacy and Human Rights

I. INTRODUCTION

The twenty-first century has seen an unprecedented meeting of artificial intelligence and digital creativity in the creation of deepfakes: synthetically generated or manipulated audio-visual content that can convincingly replicate a person’s appearance, voice or expressions.[1] The technology, most effectively realized through Generative Adversarial Networks (GANs), started as an experiment in computer vision and entertainment but quickly became disruptive, with both transformative and dangerous implications.[2] Deepfakes blur the line between reality and fabrication: they can serve legitimate purposes, such as artistic expression, parody and accessibility tools, but they have also become instruments of misinformation, identity theft, political manipulation and non-consensual sexual imagery.[3] Their increasing sophistication and ease of dissemination have produced a worldwide crisis of authenticity, undermined public trust and exposed the inadequacy of legal mechanisms to address the resulting harm.

Although countries have implemented a range of laws on privacy, data protection and intellectual property, the deepfake phenomenon cuts across these legal silos. Each framework was developed for an earlier technological era, and none was designed to govern the simulation of identity via artificial intelligence, creating a fragmented landscape of liability and remedies across borders.[4] From an intellectual property perspective, deepfakes challenge the notions of human authorship and originality on which copyright law turns; where manipulation or generation has no obvious human author to hold accountable, traditional categories struggle to fit.[5]

Even moral rights – most notably the right of integrity under Berne Convention Article 6bis – are geared towards the treatment of works, not the impersonation of identity, leaving a protection gap where a person’s likeness or voice is synthetically replicated without consent.[6] In privacy and data protection, attention centres on the capture and reuse of biometric data[7] (faces, voices) for training and synthesis, but policy work in the EU and beyond, though alert to the threat, remains patchy and jurisdiction-bound.[8] At the level of international human rights, the International Covenant on Civil and Political Rights protects privacy (Article 17) and freedom of expression (Article 19), but these mid-twentieth-century norms were not written with synthetic identity in mind, forcing courts and regulators to balance dignity and autonomy against speech and innovation without clear, shared standards.[9]

This article argues that unless intellectual-property, privacy and human-rights frameworks are brought together into a cohesive international regime, deepfake technology will continue to exploit loopholes in the law. Through comparative doctrinal analysis – of the European Union, the United States, China and India – and engagement with international human rights discourse, the paper builds out the normative contours of a multilateral instrument that incorporates authorship, consent, transparency and accountability as part of a workable global response.

The organization of the paper is as follows:

Chapter II provides an explanation of the conceptual and technological basis of deepfakes;

Chapter III outlines the IP aspects (authorship, originality, moral rights, and publicity/personality interests);

Chapter IV addresses privacy and data protection (biometric data, consent, and platform responsibility);

Chapter V situates deepfakes within international human rights law (privacy, dignity, expression);

Chapter VI compares regulatory responses; and

Chapter VII proposes a treaty-level framework that can harmonize globally while still allowing legitimate innovation.

II. CONCEPTUAL AND TECHNOLOGICAL FOUNDATIONS

Deepfakes are synthetic audio-visual or audio-only artefacts that mimic the face, voice or mannerisms of a real person with high fidelity, by learning that person’s patterns from a large data set and then generating content that looks and/or sounds similar. In practice, most state-of-the-art systems for realistic face or scene synthesis are generative models – namely Generative Adversarial Networks (GANs) or, more recently, diffusion models. A typical GAN has a generator network that creates samples and a discriminator that attempts to distinguish fake samples from real ones; training proceeds as a minimax game until the generator is good enough to fool the discriminator.[10] Diffusion models, by contrast, learn to denoise increasingly corrupted data, undoing a stochastic noising process, and when trained at scale they now match or surpass GANs in image quality and controllability.[11] These architectures, combined with modern computer-vision pipelines (face detection, landmarking, segmentation and reenactment) and powerful consumer GPUs, have made realistic identity swaps, lip-synced “talking heads” and voice clones possible from a few seconds of reference material.
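
To make the minimax formulation concrete, the original GAN objective (Goodfellow and others, n 10) can be stated as a two-player value function, in which the discriminator D is trained to maximize its classification accuracy while the generator G is trained to minimize it:

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]

Training alternates gradient updates on D and G; at the theoretical optimum the generator’s output distribution matches the data distribution, at which point the discriminator can do no better than chance.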

How a deepfake is made.

A typical video deepfake pipeline includes: (i) data acquisition (images/frames of source and target, alignment); (ii) model training (autoencoder/GAN/diffusion, learning identity features from the source conditioned on the target’s pose and expression); and (iii) inference and compositing (synthetic face, colour/illumination matching, temporal smoothing and flicker reduction). This pipeline is implemented in open-source frameworks such as DeepFaceLab and packaged for ordinary users, often with pre-trained components, lowering the skill barrier to producing face-swaps.[12] For talking heads and redubbing, specialized lip-sync systems learn an audiovisual synchronization objective, so that a target face moves in time with arbitrary speech input, even where the individual never said those words.[13] The same logic applies to audio deepfakes: neural voice-cloning systems can synthesize speech in a target’s timbre from a handful of samples, either by fine-tuning a multi-speaker model (speaker adaptation) or by inferring a speaker embedding (speaker encoding).[14] Newer neural-codec language models (e.g. VALL-E) approach text-to-speech as conditional language modelling over discrete audio tokens, allowing zero-shot imitation from a three-second voice prompt while maintaining prosody and background conditions with uncanny accuracy.[15]
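
For readers who want the three stages in one view, the following is an illustrative Python sketch. Every function here is a hypothetical placeholder rather than any real framework’s API; only the control flow, mirroring the acquisition, training and compositing stages described above, is the point.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    pixels: bytes  # stand-in for decoded image data

def detect_and_align(frame: Frame) -> Frame:
    # (i) data acquisition: locate the face and normalise pose/landmarks
    return frame  # placeholder

def train_identity_model(src: List[Frame], tgt: List[Frame]) -> Callable[[Frame], Frame]:
    # (ii) model training: learn the source identity conditioned on the
    # target's pose and expression (autoencoder/GAN/diffusion in practice)
    return lambda tgt_face: tgt_face  # placeholder "generator"

def composite(synthetic: Frame, original: Frame) -> Frame:
    # (iii) inference + compositing: colour/illumination matching,
    # temporal smoothing and flicker reduction
    return synthetic  # placeholder

def face_swap(source: List[Frame], target: List[Frame]) -> List[Frame]:
    src_faces = [detect_and_align(f) for f in source]
    tgt_faces = [detect_and_align(f) for f in target]
    generator = train_identity_model(src_faces, tgt_faces)
    return [composite(generator(f), f) for f in tgt_faces]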

Typology.

For legal analysis it is helpful to organize deepfakes along two axes: intent and use-case. On the harmless side sit parody/satire and transformative art, including consensual performances and accessibility uses (e.g. dubbing films into other languages while keeping lip movements synchronized). On the harmful side sit (a) political/strategic impersonation (fabricated speech by a public figure to influence public decision-making); (b) pornographic/sexual exploitation (unconsented sexualized images or videos using a person’s likeness); and (c) commercial deception (fake endorsements, brand impersonation, corporate fraud and social engineering). Computer-vision scholarship classifies these use-cases into families of manipulation, such as identity swap, attribute/expression manipulation, face reenactment, entire face synthesis and audio-visual asynchrony, providing a shared vocabulary for detection and policy.[16] Empirically, multiple studies and investigations have found that non-consensual sexual deepfakes constitute a large share of what circulates online – a pattern with severe consequences for women’s privacy, dignity and safety.[17]

Cross-domain impacts – misinformation and election integrity.

Political deepfakes exploit the human propensity to rely on audiovisual ‘evidence’, using the illusion of eyewitnessing to erode epistemic trust. During Russia’s invasion of Ukraine in 2022, a video purporting to show President Zelenskyy ordering troops to surrender went viral briefly before being debunked and taken down; though amateurish, it demonstrated the power of the format and how quickly such content can spread in crisis situations.[18] In January 2024, on the eve of the New Hampshire presidential primary, AI-generated robocalls simulated the voice of President Biden to discourage turnout, leading to criminal charges by the New Hampshire Attorney General and federal enforcement activity in the United States. The FCC clarified that AI-generated voices in robocalls are unlawful under existing rules.[19] These incidents illustrate how deepfakes can (i) suppress or distort participation, (ii) seed plausible deniability (the “liar’s dividend”, whereby genuine footage is dismissed as fake), and (iii) overwhelm verification systems during fast-moving events – all of which burden election administrators, platforms and fact-checkers across jurisdictions.

Cross-domain impacts – economic and brand damage.

Impersonation fraud and reputational damage are on the rise in the commercial sector. In a widely publicized 2019 case, fraudsters cloned the voice of a CEO and tricked a subordinate into wiring more than US$240,000 to an offshore account; investigators attributed the scheme’s success to subtle vocal cues (the executive’s accent and the “melody” of his speech).[20] In 2024, Hong Kong police reported a multi-million-dollar loss at a global engineering firm after staff were duped by a video conference populated with AI-generated impersonations of executives – an escalation from single-channel voice attacks to multi-modal deception.[21] Beyond direct theft, brand deepfakes (fake endorsements or doctored advertisements) raise consumer-protection and publicity/personality-right issues, while synthetic reviews and fake testimonials undermine market integrity and expose platforms and advertisers to regulatory scrutiny. The compliance surface for organizations has accordingly grown to include identity verification in workflows, media provenance and watermarking, and incident response to malicious virality.

Why the technology matters to law.

The architecture of deepfake systems has direct implications for the design of liability. Data provenance issues (unconsented scraping of face images and voice samples) put pressure on privacy and biometric-data regimes, while media provenance standards (content credentials/watermarking) are being developed to make synthetic content traceable throughout its lifecycle.[22] Model architecture (GAN versus diffusion) also affects the detectability of artefacts, and therefore the viability of ex post technical defences, as computer-vision surveys of the recurrent families of manipulation and their detection cues show.[23] Few-shot voice cloning collapses consent barriers by making credible imitation possible from far fewer samples,[24] and lip-sync/reenactment tools (e.g. Wav2Lip) allow composite manipulations in which neither the underlying video nor the audio alone is forged, complicating rules of evidence and platform moderation.[25] The deepfake supply chain (from data acquisition through model training to post-production) maps onto possible duty-bearers: data curators, model providers, application developers, deployers, and platforms hosting or amplifying content – corresponding to the due-diligence notions of the UN Guiding Principles on Business and Human Rights[26] and the platform-governance duties of the EU Digital Services Act.[27] Because high-quality generation is easy on commodity hardware with widely available open-source tools,[28] jurisdiction-bound remedies struggle to keep pace with cross-border dissemination – a problem familiar from cybercrime cooperation under the Budapest Convention.[29]

For the analysis that follows, this article adopts a three-tier taxonomy of deepfake uses, which structures the doctrinal chapters. Tier 1 (Benign/Transformative) covers consensual parody, satire, art, restoration and dubbing (provided consent, credit and labelling are addressed). Tier 2 (Ambiguous/Context-dependent) includes political satire, journalistic reconstructions and accessibility use-cases, whose legality and legitimacy depend on context and labelling. Tier 3 (Malicious/Deceptive) comprises (i) political/electoral disinformation and foreign-influence operations; (ii) non-consensual sexual imagery, harassment and extortion; and (iii) commercial deception – impersonation, brand/endorsement fabrications and executive fraud. The structure draws on computer-vision taxonomies and legal scholarship that map technical forms (swap, reenactment, synthesis; audio, visual, audio-visual) to legal interests (authorship/moral rights; privacy/data protection; reputation/dignity; consumer protection/unfair competition), to the international dimension of platform governance and cross-border takedown cooperation,[30] and to the “liar’s dividend”.[31]
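
Schematically, the taxonomy can be restated as a simple data structure. The sketch below is merely an illustrative restatement of the text in Python; the tier labels follow the article, and the mapping is expository rather than a normative classification tool.

from enum import Enum

class Tier(Enum):
    BENIGN_TRANSFORMATIVE = 1        # consensual parody, satire, art, restoration, dubbing
    AMBIGUOUS_CONTEXT_DEPENDENT = 2  # political satire, reconstructions, accessibility
    MALICIOUS_DECEPTIVE = 3          # disinformation, non-consensual sexual imagery, fraud

# Legal interests chiefly engaged by each tier, per the mapping in the text.
LEGAL_INTERESTS = {
    Tier.BENIGN_TRANSFORMATIVE: ["authorship/moral rights (consent, credit, labelling)"],
    Tier.AMBIGUOUS_CONTEXT_DEPENDENT: ["expression vs privacy balancing, turning on context and labels"],
    Tier.MALICIOUS_DECEPTIVE: [
        "privacy/data protection",
        "reputation/dignity",
        "consumer protection/unfair competition",
    ],
}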

III. INTELLECTUAL PROPERTY DIMENSIONS

Deepfakes occupy a grey area in copyright law because they typically re-contextualize or recombine existing expression while creating new identity features (face, voice, style). Copyright law protects the original expression in source works and, in many systems, the author’s exclusive right to prepare derivative works – works “based upon one or more pre-existing works” that recast, transform or adapt them.[32] As a practical matter, a face-swap or voice-clone that lifts frames, stills, sound recordings or other protectable portions of a source work to build its synthetic output can implicate reproduction and derivative-work rights. U.S. guidance emphasizes that derivative status turns on whether protectable expression from the underlying work has been recast or transformed in the new work.[33] Platform liability also matters: under Article 17 of Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market (DSM Directive),[34] certain user-upload services perform an act of communication to the public and must obtain authorization (e.g. licences) or demonstrate high diligence, placing some deepfake dissemination within a special liability regime for online content-sharing service providers. The Commission’s guidance clarifies how Article 17 should work in practice, including the protection of users’ exceptions and limitations.[35]

Moral rights are a complementary but incomplete layer. Article 6bis of the Berne Convention protects[36] authors’ rights of paternity and integrity, aimed at preventing distortion, mutilation or other treatment of a work prejudicial to the author’s honour or reputation. Section 57 of the Indian Copyright Act 1957[37] codifies moral rights, permitting authors to restrain, or claim damages for, derogatory uses of their works. While these provisions readily apply to manipulations of works (for example, of a performance, a painting or a film still), they do not conveniently apply to impersonation of a living person’s identity where no underlying work of that person is involved. Where a deepfake creates a “new” performance by a singer or an utterance by a speaker without copying their existing protected recordings, classic moral-rights tools are a poor fit; the claim then shifts from the integrity of a work to control over identity – an interest not addressed by the author-centric structure of the DSM Directive.[38]

Trademark and personality doctrines fill the gap left by copyright and moral rights. Where deepfakes imitate logos, mimic get-ups or reproduce speech patterns associated with a brand, trademark law addresses source-identification harms – confusion, association and dilution. More importantly, the right of personality, which controls the commercial value of identity (voice, name, likeness), is engaged where a synthetic performance is used to sell products or convey endorsements without permission. U.S. courts recognized voice misappropriation in Midler v Ford Motor Co[39] (a sound-alike Bette Midler commercial) and awarded damages for false endorsement and voice misappropriation in Waits v Frito-Lay[40] (a sound-alike Tom Waits radio advertisement). In Titan v Ramkumar Jewellers,[41] the Delhi High Court recognized personality rights in India in the context of unauthorized use of celebrity images for advertising, holding that unauthorized commercial use of celebrity identity without consent is actionable under passing off and allied doctrines. Most recently, in Asha Bhosle v Mayk Inc[42] (Bombay HC interim order), the Bombay High Court granted interim relief to singer Asha Bhosle against AI voice-cloning and the unauthorized trading of models mimicking her voice, treating such cloning as a breach of personality rights and ordering takedowns – one of the first judicial responses to synthetic-media misuse in India.[43]

The right of publicity and voice rights tie these threads together. The Restatement (Third) of Unfair Competition § 46[44] imposes liability where a person “appropriates the commercial value of a person’s identity by using the person’s name, likeness, or other indicia of identity without permission for purposes of trade” – a formulation that plainly covers voice and other non-visual indicia. California’s statute likewise prohibits the knowing commercial use of another’s name, voice, signature, photograph or likeness without permission.[45] Midler and Waits establish that an identifiable voice can be a protectable indicium of identity even where no copyrighted recording is copied – an insight central to AI voice-cloning cases, where the synthetic output is new audio that nonetheless monetizes the celebrity’s persona. In the Indian context, Titan and Asha Bhosle show courts drawing on personality rights and passing off to chart a pathway towards explicit protection of voice and likeness against AI-enabled misappropriation.

Collectively, the copyright-moral rights-personality/publicity triad provides a toolkit for deepfake controversies, but each branch rests on different predicates: copyright and moral rights safeguard works and their creators; trademark prevents consumer confusion and dilution; publicity/personality protects commercial interests in identity. For cross-border deepfakes – voice-clone endorsements and brand impostors – effective redress often depends on hybrid pleading (copyright for the use of source works, trademark for confusion/dilution, publicity/personality for appropriation of identity), supported by platform-level duties under regimes such as Article 17.[46] The doctrinal friction at these interfaces underscores the need, developed later in this article, for a unified international framework that explicitly treats synthetic identity as a legal interest while preserving space for parody and legitimate transformative uses.

IV. PRIVACY AND DATA PROTECTION DIMENSIONS

Deepfakes expose a structural contradiction in privacy law: the technology is premised on the ability to quantify and model a person’s identity – the geometry of a face, the way a person walks, the timbre of a voice – yet much of current data protection regulates uses of “personal data” rather than synthetic re-performances of that identity. EU law already recognizes that biometric data[47] (personal data generated by specific technical processing of a natural person’s physical or behavioural characteristics which allow unique identification) demands special attention: its processing is generally prohibited unless a specific exception applies.[48] Read together with guidance on consent, this framework signals that scraping selfies or public videos and transforming them into faceprints or voiceprints cannot be justified by vague, implied permission: consent must be freely given, specific, informed and unambiguous, and data subjects must be able to refuse or withdraw consent without detriment.[49] Companies have been fined for violating these principles: Clearview AI’s bulk scraping of publicly available photos for facial recognition was found unlawful and sanctioned by multiple EU DPAs[50] for lacking a proper legal basis and adequate protection of data subjects, despite the images being public.[51]

India’s new Digital Personal Data Protection Act 2023 (DPDP) defines “personal data” broadly as data about an identifiable natural person, and “digital personal data” as personal data in digital form, but it does not carve out special categories (such as biometric data) as the GDPR does.[52] That gap is compounded by an exclusion: the Act does not apply where personal data has been made publicly available by the data principal or under a legal obligation.[53] In practical terms, deepfake creators may argue that training on public images or videos falls outside the DPDP’s scope, even where those materials, once modelled, are used to synthesize a person’s likeness. This mismatch – between identity-as-data in machine learning and subject-centred protections in statute – animates the core problem: privacy rules govern processing events, but deepfakes cause identity-substitution harms that are not exhausted by data-flow compliance.

Consent doctrine illustrates the strain. Under the GDPR, controllers cannot infer consent from mere public availability, nor can they treat “manifestly made public” as an open door to processing special-category data such as biometrics for identification purposes.[54] European authorities have firmly rejected the premise that public posting equals permission to be scraped into biometric templates and redistributed or redeployed by others: the French CNIL’s EUR 20 million decision against Clearview AI, and parallel actions by the Italian and other data protection authorities, framed the scraping and creation of biometric templates as unlawful and ordered erasure and cessation.[55] In India, by contrast, the DPDP’s public-availability carve-out would likely collapse this analysis at the threshold, leaving recourse to torts or criminal statutes rather than data protection as such.[56]

Reputation torts and criminal law partially bridge that gap for non-consensual sexual deepfakes. English courts have frequently granted privacy injunctions against the publication of intimate images (including against “persons unknown”), recognizing the harm that flows from the immediacy and virality of the online environment.[57] England and Wales historically criminalized “revenge pornography” under section 33 of the Criminal Justice and Courts Act 2015 (later amended to cover threats)[58] – and although that specific offence has been repealed and replaced under the Online Safety Act 2023, the earlier framework remains relevant for pre-2024 conduct and for the doctrinal trajectory from privacy to image-based abuse.[59] This case law establishes that even images which initially had a consensual or public character can, when re-contextualized, infringe privacy and dignity interests – the same intuition that motivates the EU’s biometric special-category rules.

Platform obligations are the second line of defence. In India, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (as amended) impose due-diligence obligations on intermediaries, including grievance-redressal mechanisms, expedited takedown on actual knowledge and cooperation with lawful orders – all of which victims can invoke to have deepfakes swiftly removed.[60] In the EU, the Digital Services Act (DSA) systems-regulation model obliges platforms to establish notice-and-action mechanisms, statements of reasons for decisions and internal complaint-handling, concentrated in Articles 14-17,[61] introducing procedural levers for challenging synthetic impersonations even where data-protection bases are disputed.[62] These duties do not, by themselves, make a deepfake “illegal”, but they reduce the friction of remediation and standardize transparency – important where the harm is time-sensitive and metastasizes through propagation.

The essential point is that privacy regimes were designed to regulate the processing of personal data and, in the EU, to fence off biometrics because they reveal identity. Deepfakes transform those identity markers into synthetic performances: once a model is trained, the original data need not be retained, and the harm is not reducible to any single processing step. EU law partially internalizes this risk through Article 9 enforcement against scraping, whereas India’s DPDP (lacking special-category protection, and retaining its public-availability exclusion) leaves more work for general torts, image-based abuse offences and platform-duty rules. The point, then, is not that privacy law is blind but that it under-delivers: focused on misuse of data, it is ill-equipped to grasp the unauthorized re-creation of identity that deepfakes make possible. The next chapter takes this a step further, charting how IP and personality-rights principles can be combined with data-protection and platform-governance tools to secure identity without creating chokepoints that suppress lawful parody or speech.

V. INTERNATIONAL HUMAN RIGHTS LAW INTERFACE

Deepfakes reveal a fault-line in the international human rights architecture: they transform the building blocks of identity (face, voice, gesture) into infinitely reproducible artefacts detached from autonomy, context and consent. The analysis begins with the rights to privacy and dignity. Article 17 of the International Covenant on Civil and Political Rights (ICCPR)[63] guarantees everyone against arbitrary or unlawful interference with “privacy, family, home or correspondence” and against unlawful attacks on honour and reputation, and obliges states to ensure legal protection against such interference or attacks. Article 12 of the Universal Declaration of Human Rights (UDHR) protects the same right and supplies the normative context for modern privacy law.[64] The European Court of Human Rights (ECtHR) has elaborated these guarantees under Article 8 ECHR: Von Hannover v Germany[65] (No 1) held that the repeated publication of photographs of a public figure engaged in purely private activities violated her right to respect for private life, distinguishing contributions to debates of general interest from mere prurience about private life. More recently, in M.L. v Slovakia[66] the Court emphasized that reputation interests can fall within Article 8 and that domestic courts must conduct a careful, structured balancing of expression and private life where sensational allegations and images circulate, even where family members are only indirectly affected. Together these authorities affirm that privacy and reputation are not parochial tort interests but fundamental rights limiting the ways identity may be exposed and recontextualized – limits that deepfakes routinely transgress.

Any regulatory response must respect freedom of expression while protecting victims from identity-based harm. Article 19 of the ICCPR guarantees the freedom to seek, receive and impart information and ideas of all kinds; paragraph 3, however, permits restrictions only where they meet a stringent three-part test: prescribed by law, serving a legitimate purpose (e.g. respect for the rights or reputations of others), and necessary and proportionate.[67] The Human Rights Committee’s General Comment No 34 adds that restrictions must be the least intrusive means of achieving their protective function, and that mere offensiveness or shock is no ground to restrict speech.[68] Applied to deepfakes, these standards counsel against broad prior restraint while permitting narrowly tailored responses – labelling requirements, takedown orders and civil remedies – where synthetic content substantially violates privacy, dignity or reputation. This proportionality frame also protects legitimate parody and artistic experimentation, provided it is clearly marked, while condemning covert impersonation that exploits a person’s identity to mislead audiences or cause harm.

Deepfake harms are gendered, which adds complexity to the human rights calculus. Under the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), states must condemn discrimination in all its forms (Article 2) and modify social and cultural patterns that entrench stereotypes and inequality (Article 5).[69] General Recommendation No 35 (2017), the CEDAW Committee’s update to GR 19, recognizes technology-facilitated gender-based violence as part of the continuum of violence against women and demands comprehensive legislative, policy and preventive responses, including to digital violence.[70] UN system reporting in 2024-2025 has likewise documented that AI and generative tools are accelerating image-based abuse, including non-consensual deepfake pornography that overwhelmingly targets women and girls,[71] and has called on states to adopt consent-based criminalization and civil redress without imposing undue evidentiary burdens on victims.[72] Read against ICCPR Articles 2 and 26 (non-discrimination), these instruments support a due-diligence standard under which states must prevent, investigate, punish and remedy gender-based online violence, including image-based abuse of minors and online impersonation.

Human rights law also frames state obligations with respect to private actors. The UN Guiding Principles on Business and Human Rights (UNGPs) set out a tripartite structure: the state duty to protect against third-party abuses; the corporate responsibility to respect human rights through policy commitments, human rights due diligence and remediation; and access to remedy.[73] Under this framework, states should impose human-rights due-diligence obligations on providers of generative systems (for example, risk assessment of identity-based harms, consent-based controls for training on biometric data, and provenance and labelling tools) and ensure effective judicial and non-judicial remedies in the event of abuse. Extraterritorial and cross-border elements, omnipresent in deepfake diffusion, do not excuse these obligations; rather, they call for regulatory cooperation and interoperable norms so that remedies are not foiled by jurisdictional arbitrage.

Brought together, these strands make the human rights interface a principled but under-specified framework for governing deepfakes. Synthetic exposure and reputational attack engage privacy and dignity norms (ICCPR Article 17; UDHR Article 12; ECtHR Article 8 case law); freedom of expression (Article 19) constrains regulatory answers through legality, legitimacy and proportionality; gender-equality instruments (CEDAW and GR 35) require special attention to technology-facilitated violence against women; and the UNGPs distribute roles between states and corporations. Gaps remain: the instruments are largely technology-neutral and do not explicitly address synthetic identity manipulation; it is unclear where harm begins, what role consent plays in training and generation, and where responsibility lies in complex AI supply chains. Translating these principles into operational rules – clear definitions of synthetic impersonation; consent and notice requirements; interoperable provenance and labelling standards; cross-border takedown cooperation; and remedies calibrated not to choke legitimate expression – is the case for an international instrument, whether a protocol, a set of model provisions, or coordinated soft law backed by platform obligations. The following chapter applies this rights-based template to comparative regulatory responses and outlines design options for a harmonized global framework.

VI. COMPARATIVE REGULATORY RESPONSES

Viewed comparatively, three models of deepfake regulation come into view: the EU’s systems regulation with explicit transparency obligations; the United States’ patchwork statutes and contested platform responsibility; and China’s licensing-and-labelling regime – with India relying on intermediary due diligence while its privacy rule-set matures. Disclosure and provenance are the common denominator that binds them; scope, enforceability and remedies, however, diverge.

European Union. In the final text of the AI Act, deepfake transparency obligations sit in Article 50 (Chapter IV), rather than Articles 52-54 as in earlier drafts.[74] Providers must design systems so that people know they are interacting with AI; deployers of emotion-recognition and biometric-categorization systems must inform those exposed to them; and, most importantly, deployers of AI that generates or manipulates image, audio or video content constituting a deepfake must disclose that the content has been artificially created or manipulated. The rule accommodates clearly identified artistic or satirical works, and relaxes the disclosure requirement for AI-generated text published to inform the public where the text has undergone human editorial oversight. The Recitals state the rationale plainly: to curb the risks of impersonation and deception while preserving free expression and the arts.[75] By placing deepfake labelling in the AI Act rather than the DSA, the EU establishes a technology-specific transparency standard applicable across sectors, leaving questions of illegality to substantive law (e.g. privacy, IP, consumer protection).

United States. The federal landscape remains disjointed. At the state level, California AB 602 (2019)[76] created a civil cause of action for non-consensual sexually explicit deepfakes, with statutory damages and injunctive relief – one of the first instances of synthetic identity abuse supporting standalone remedies. California also passed AB 2655 (Defending Democracy from Deepfake Deception Act),[77] requiring large online platforms to block or label materially deceptive election-related content during specified pre- and post-election windows.[78] In August 2025, however, a federal judge in Sacramento preliminarily enjoined AB 2655’s provisions, signalling constitutional and Section 230 resistance to state-enforced platform moderation of political speech.[79] At the federal level, the DEEPFAKES Accountability Act (H.R. 5586, 118th Congress)[80] would have imposed transparency (disclosure/watermark) obligations and given victims a civil remedy, but it stalled – illustrating the difficulty of establishing a consistent federal regime that balances speech and remediation. The overall result is a patchwork: direct civil remedies (e.g. AB 602), platform obligations vulnerable to litigation (AB 2655), and no national framework, leaving platforms’ own policies and tort law to do much of the work.

China. China’s regime is the most prescriptive: the Provisions on the Administration of Deep Synthesis Internet Information Services (effective 10 January 2023). Providers must label AI-generated or manipulated content conspicuously, embed technical identifiers (e.g. watermarks/metadata), secure consent before editing an individual’s voice or image, prevent misuse through content management and security audits, and register certain services.[81] Subsequent labelling measures (2024-2025) build on this architecture, adding user-visible labels and machine-readable identifiers to support detection and provenance, extending accountability along the content supply chain.[82] The Chinese model thus pairs licensing, provenance and takedown with upstream controls on providers, prioritizing fast enforcement and source-tracing over case-by-case balancing tests.

India. India has no deepfake-specific statute. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021[83] require intermediary due diligence that reaches deepfakes – complaint mechanisms, notice-and-takedown, and cooperation with lawful orders – but impose no substantive labelling or transparency requirements. The Digital Personal Data Protection Act 2023 is on the statute book, with the draft DPDP Rules 2025 (still under consultation) operationalizing consent, breach notification and security requirements, but neither yet addresses deepfake labelling or provenance.[84] In effect, victims must combine platform remedies under the IT Rules with claims in tort, criminal law (e.g. image-based abuse) and, where available, personality/publicity doctrines; absent a statutory disclosure or labelling requirement, provenance rests on platform policies and ad hoc orders rather than a horizontal rule.

Comparative critique. The EU’s AI Act pioneers a horizontal transparency rule for deepfakes; the US leaves remedies largely to the states (and exposed to First Amendment and Section 230 challenge); China insists on source-to-sink traceability and consent; India relies on intermediary process while its privacy regulation matures. None of these regimes, however, supplies TRIPS-style coordination – minimum standards, national treatment, binding dispute settlement – capable of containing jurisdictional arbitrage in cross-border synthetic impersonation. An analogous package could begin with mutual recognition of deepfake labelling, baseline consent and notice rules for identity cloning, interoperable content credentials, and expedited cross-border takedown cooperation, while leaving domestic speech balancing intact. The lesson is plain: provenance without portability will not work, and portability without minimum rights for the person depicted will not protect. A lean, internationally portable ruleset, underpinned by platform due diligence, would bridge the enforcement gap that purely national solutions cannot close.

VII. TOWARD A UNIFIED INTERNATIONAL FRAMEWORK

A workable regime for deepfakes must treat synthetic identity as a protected interest while leaving room for legitimate expression. The basic values are clear: human dignity, informed consent, transparency and accountability. Dignity frames the harm: a person’s likeness or voice is not raw material. Consent governs the use of biometric data for training and the generation of synthetic likenesses. Transparency ensures audiences are not deceived. Accountability is assigned across the AI supply chain – to those curating datasets, those providing models, and those deploying applications and operating platforms. The EU AI Act’s horizontal disclosure obligation for artificially generated or manipulated content already provides a regulatory anchor that an international instrument could generalize beyond the EU.[85]

Institutionally, an agile instrument is preferable to a full treaty. This chapter proposes a WIPO-UNESCO Joint Protocol on Synthetic Media and Identity Protection (“the Protocol”) as soft law with opt-in implementation annexes. WIPO brings doctrinal expertise and existing AI/IP policy fora; UNESCO contributes human rights-based frameworks for media governance and platform accountability.[86] The Protocol would supply common definitions (e.g. synthetic impersonation, consensual synthetic performance, provenance metadata) and minimum obligations, leaving flexibility for regional instruments to flesh them out. To avoid duplicating existing obligations, the Protocol should cross-refer to the AI Act’s deepfake disclosure principle for Parties within the EU, while prescribing a parallel, technology-neutral disclosure rule for others.[87]

To hard-wire enforceability across borders, the Protocol should be accompanied by an interpretive Annex to TRIPS introducing a new Article 39A[88] (“AI Identity Misuse”) that (i) recognizes the trade value of personal indicia (name, voice, likeness) as protectable against misappropriation in trade, (ii) binds Parties to provide effective civil remedies and preliminary measures against cross-border synthetic impersonation in trade, and (iii) encourages mutual recognition of content provenance signals as evidence of misrepresentation or deception. This would stand beside TRIPS Article 39 on undisclosed information, harnessing TRIPS enforcement disciplines (provisional measures, injunctions, damages) without transforming identity into property; the focus is unfair competition and deception, not exclusivity.[89]

Three operational elements are critical. First, provenance and labelling: Parties should adopt a mandatory, interoperable labelling standard for AI-generated or manipulated image, audio and video content, with human-readable notices and machine-readable provenance (e.g. C2PA content credentials) attached at creation and preserved through edits wherever technically feasible.[90] (An illustrative sketch of such a machine-readable provenance record follows below.) Second, rapid cross-border takedown: the cooperation layer should be modelled on the notice-and-takedown workflows of the INHOPE network[91] – designated national hotlines, trusted flaggers, standardized evidentiary templates, and service-level targets for removal – generalized from CSAM to synthetic impersonation that violates the Protocol.[92] Third, due diligence: AI developers and deployers should be required to carry out human-rights and consumer-protection impact assessments for identity risks, implement consent management for training data where biometric identifiers are implicated, and ship provenance/labelling on by default – in line with the UN Guiding Principles on Business and Human Rights.[93]
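
To make the machine-readable half of the labelling requirement concrete, the sketch below shows what a provenance manifest might record. It is a hypothetical illustration expressed as a Python literal: the field names are invented for exposition and do not reproduce the C2PA schema or any existing standard.

# Hypothetical provenance manifest for a synthetic video clip.
# All field names are illustrative; real standards (e.g. C2PA) define
# their own schemas and cryptographic binding rules.
provenance_manifest = {
    "asset_hash": "sha256:<hash-of-file>",        # binds the manifest to the media file
    "synthetic": True,                            # human-readable disclosure flag
    "generator": {"tool": "example-model", "version": "1.0"},  # hypothetical tool name
    "consent_record": "consent-2025-001",         # pointer to the depicted person's consent
    "edit_history": [
        {"action": "generated", "timestamp": "2025-01-01T00:00:00Z"},
        {"action": "cropped",   "timestamp": "2025-01-02T12:00:00Z"},
    ],
    "issuer_signature": "base64:<signature>",     # issuer's signature over the manifest
}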

Procedurally, Parties should commit to interoperable remedies rather than identical statutes. A “minimums and mutual-assistance” model would: (a) codify synthetic impersonation and non-consensual synthetic sexual imagery as civil wrongs with a fast track to injunctive relief; (b) recognize foreign takedown notices under the Protocol (subject to limited public-policy exceptions); (c) fast-track data preservation and cross-platform notices; and (d) allow platforms to act quickly without prejudging liability. Inspiration can be drawn from the Budapest Convention on Cybercrime, where the combination of minimum criminalization, procedural measures and 24/7 contact points has produced operational cooperation extending beyond the treaty text – an architecture flexible enough to apply in civil and platform-governance contexts.[94] (A sketch of a standardized notice follows below.)
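
To illustrate the “standardized evidentiary templates” contemplated above, the following hypothetical notice structure shows the minimum fields a Protocol-compliant cross-border takedown request might carry; the schema is invented for exposition, not drawn from any existing hotline network.

# Hypothetical Protocol takedown notice; all field names are illustrative.
takedown_notice = {
    "notice_id": "example-0001",
    "claim_type": "synthetic_impersonation",        # the Protocol wrong alleged
    "content_url": "https://example.com/item/123",  # where the material is hosted
    "depicted_person": {"name": "<redacted>", "consent_given": False},
    "provenance_check": {"label_present": False, "credentials_valid": None},
    "requested_action": "removal",                  # removal / labelling / deprioritization
    "urgency": "expedited",                         # maps to a service-level target
    "origin_jurisdiction": "IN",                    # ISO country code of the notifying hotline
}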

Safe harbours must balance innovation and expression. The Protocol should provide an exception for parody, satire, scholarship, reporting and artistic experimentation where (i) there is no intent to deceive, (ii) disclosure is prominent and proximate to the content, and (iii) the use does not function as a substitute endorsement in commerce. To serve journalism and documentary uses, editorial control should be recognized as a disclosure route (consistent with the AI Act’s approach to text informing the public), and the Protocol should include review-and-update provisions to track technical change (e.g. diffusion-model watermark robustness) and a compliance dashboard for Parties and platforms.

Overall, rather than a single treaty, the suggested course is a stack of responses: a WIPO-UNESCO Protocol codifying principles and minimum obligations; a TRIPS Annex 39A entrenching cross-border commercial remedies; provenance and labelling built on open technical standards; INHOPE-style developer and platform takedown cooperation; and UNGP-style due diligence. This combination would close the current enforcement gap – provenance without portability, rights without remedies – and deliver rights-respecting, interoperable global deepfake governance.

VIII. CONCLUSION

This paper has shown that the legal response to deepfakes is fragmented across three bodies of law, none of which was designed to address synthetic identity. Copyright and moral-rights law safeguards works and their creators, not the imitation of a person’s image or voice in a new work. Privacy and data-protection regimes govern the handling of personal data, sometimes fencing off biometrics, but falter where the principal harm is a re-performance of identity that persists after the data has been distilled into a model. International human-rights law supplies the normative backbone – privacy, dignity, reputation and freedom of expression – but is technology-neutral and under-specified on the threshold of harm, the place of consent in training and generation, and the allocation of responsibility in AI supply chains. Together, these silos leave victims and platforms navigating inconsistent, jurisdiction-bound obligations and remedies too sluggish for viral harms.

Deepfakes also cut across disciplinary and jurisdictional borders. Their production chains – dataset curators, model providers, deployers and intermediaries – are natively cross-border, and their uses are distributed across expression, commerce and intimate life. Regulation targeting a single node or doctrinal element will miss the systemic nature of the problem. The comparative survey confirms this: the European Union’s transparency-first paradigm, the United States’ patchwork of targeted state solutions, China’s licensing-and-labelling model and India’s intermediary-due-diligence model are all puzzle pieces, none of which provides portable guarantees or cross-border enforcement. What is required is a rights-based, internationally portable model that recognizes synthetic impersonation as a discrete legal wrong while not restricting legitimate parody, satire and transformative uses.

Accordingly, the article proposes a layered solution: fundamental principles of dignity, informed consent, transparency and accountability; a WIPO-UNESCO protocol codifying common definitions, provenance/labelling obligations and rapid takedown cooperation; a TRIPS-style annex anchoring cross-border commercial remedies without turning identity into property; UNGP-compliant due diligence by developers and platforms; and safe harbours for clearly disclosed artistic, journalistic and research uses. Future research should deepen three fronts: AI accountability, through models for testing supply-chain liability; jurisdictional enforcement, through recognition of foreign takedown orders, expedited preservation and conflict-of-laws rules suited to real-time virality; and ethical standards, through consent architectures for biometric data, opt-outs, post-mortem and cultural interests, and protections for journalists, researchers and cultural communities. Only such a concerted, principled system can close the protection gaps of the contemporary digital world while safeguarding the freedom to innovate and to express.

References:

[1] European Parliamentary Research Service, Generative AI and Deepfakes: Assessing Risks of AI-Generated Content (2023) https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/757583/EPRS_BRI(2023)757583_EN.pdf

[2] Mohamed R Shoaib and others, ‘Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models’ (2023) arXiv:2311.17394 https://arxiv.org/abs/2311.17394

[3] Danielle K Citron and Robert Chesney, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’ (2019) 107 California Law Review 1753 https://doi.org/10.2139/ssrn.3213954  

[4] European Parliament, Tackling Deepfakes in European Policy (EPRS STOA Study, 2021) https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2021)690039 accessed 8 October 2025.

[5] Jane C Ginsburg and Luke A Budiardjo, ‘Authors and Machines’ (2019) 34 Berkeley Technology Law Journal 343 https://www.law.berkeley.edu/wp-content/uploads/2024/01/Authors-and-Machines-Ginsburg.pdf

[6] Berne Convention for the Protection of Literary and Artistic Works (as amended 28 September 1979) art 6bis, WIPO Lex TRT/BERNE/001 https://www.wipo.int/wipolex/en/text/283693

[7]

[8] European Parliamentary Research Service (n 1); UNESCO, Guidelines for the Governance of Digital Platforms: Safeguarding Freedom of Expression and Access to Information (2023) https://www.unesco.org/en/internet-trust/guidelines

[9] International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171 arts 17, 19 https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights

[10] Ian J Goodfellow and others, ‘Generative Adversarial Nets’ in Advances in Neural Information Processing Systems 27 (NeurIPS 2014) https://papers.neurips.cc/paper/5423-generative-adversarial-nets.pdf

[11] Jonathan Ho, Ajay Jain and Pieter Abbeel, ‘Denoising Diffusion Probabilistic Models’ (NeurIPS 2020) https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf

[12] Ivan Perov and others, ‘DeepFaceLab: Integrated, Flexible and Extensible Face-Swapping Framework’ (2020) arXiv:2005.05535 https://arxiv.org/abs/2005.05535

[13] KR Prajwal and others, ‘A Lip Sync Expert Is All You Need for Speech-to-Lip Generation in the Wild’ (2020) https://arxiv.org/abs/2008.10010

[14] Sercan Ö Arik and others, ‘Neural Voice Cloning with a Few Samples’ (2018) arXiv:1802.06006 https://arxiv.org/abs/1802.06006

[15] Chengyi Wang and others, ‘Neural Codec Language Models are Zero-Shot Text-to-Speech Synthesizers (VALL-E)’ (2023) arXiv:2301.02111 https://arxiv.org/abs/2301.02111

[16] Rubén Tolosana and others, ‘DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection’ (2020) Information Fusion 64 131–148 https://www.sciencedirect.com/science/article/pii/S1566253520303110; (open-access preprint: https://arxiv.org/abs/2001.00179).

[17] Deeptrace (now Sensity), The State of Deepfakes (2019) https://regmedia.co.uk/2019/10/08/deepfake_report.pdf

[18] Andy Greenberg, ‘A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be’ WIRED (17 March 2022) https://www.wired.com/story/zelensky-deepfake-facebook-twitter-playbook/

[19] NPR, ‘A political consultant faces charges and fines for Biden deepfake robocalls’ (23 May 2024) https://www.vpm.org/npr-news/npr-news/2024-05-23/a-political-consultant-faces-charges-and-fines-for-biden-deepfake-robocalls

[20] Drew Harwell, ‘An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft’ Washington Post (4 September 2019) https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/

[21] Dan Milmo, ‘UK engineering firm Arup falls victim to £20m deepfake scam’ The Guardian (17 May 2024) https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video

[22] National Institute of Standards and Technology, AI Risk Management Framework 1.0 (NIST AI 100-1) (2023) https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

[23] Tolosana and others (n 16).

[24] Arik and others (n 14).

[25] Prajwal and others (n 13).

[26] UN Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights (2011) https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf

[27] Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act) [2022] OJ L277/1 https://eur-lex.europa.eu/eli/reg/2022/2065/oj

[28] Perov and others (n 12).

[29] Council of Europe, Convention on Cybercrime (Budapest Convention) (opened for signature 23 November 2001, entered into force 1 July 2004) ETS No 185 https://www.coe.int/en/web/conventions/full-list?module=treaty-detail&treatynum=185

[30] Tolosana and others (n 16).

[31] Yisroel Mirsky and Wenke Lee, ‘The Creation and Detection of Deepfakes: A Survey’ (2021) 54(1) ACM Computing Surveys 1 https://dl.acm.org/doi/10.1145/3425780

[32] 17 USC § 101 (definition of “derivative work”) https://www.law.cornell.edu/uscode/text/17/101

[33] US Copyright Office, Circular 14: Copyright in Derivative Works and Compilations (rev 2020) https://www.copyright.gov/circs/circ14.pdf

[34] Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market, art 17 (EUR-Lex) https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3A32019L0790

[35] European Commission, Guidance on Article 17 of Directive 2019/790 COM(2021) 288 final https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3A52021DC0288

[36] Berne Convention for the Protection of Literary and Artistic Works (as amended 1979) art 6bis (LII) https://www.law.cornell.edu/treaties/berne/6bis.html

[37] India, Copyright Act 1957, s 57 (IndiaCode) https://www.indiacode.nic.in/show-data?actid=AC_CEN_9_30_00006_195714_1517807321712&orderno=78

[38] Directive (EU) 2019/790 (n 34) art 17.

[39] Midler v Ford Motor Co 849 F 2d 460 (9th Cir 1988) (official PDF via Harvard) https://cyber.harvard.edu/people/tfisher/1988%20Midler.pdf

[40] Waits v Frito-Lay Inc 978 F 2d 1093 (9th Cir 1992) (Justia) https://law.justia.com/cases/federal/appellate-courts/F2/978/1093/183202/

[41] Titan Industries Ltd v Ramkumar Jewellers 2012 SCC OnLine Del 2382, (2012) 50 PTC 486 (Del) (Delhi HC, 26 April 2012)

[42] Asha Bhosle v Mayk Inc (Bombay High Court, Commercial Division), Interim Application (L) No 30382 of 2025 in Commercial IP Suit (L) No 30262 of 2025, 29 September 2025 (ad-interim order, Arif S Doctor J).

[43] Swati Deshpande, ‘Mumbai: AI voice cloning violates celebrity’s personality rights, says Bombay high court on singer Asha Bhosle’s plea’ The Times of India (Mumbai, 2 October 2025) https://timesofindia.indiatimes.com/city/mumbai/mumbai-ai-voice-cloning-violates-celebritys-personality-rights-says-bombay-high-court-on-singer-asha-bhosles-plea/articleshow/124264867.cms

[44] American Law Institute, Restatement (Third) of Unfair Competition (1995) § 46; see an accessible summary at RightOfPublicity.com https://rightofpublicity.com/statutes/restatement-third-of-unfair-competition-s46-49

[45] California Civil Code § 3344 (FindLaw) https://codes.findlaw.com/ca/civil-code/civ-sect-3344/

[46] Directive (EU) 2019/790 (n 34) art 17.

[47] Regulation (EU) 2016/679 (General Data Protection Regulation) art 4(14) https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX%3A32016R0679

[48] Regulation (EU) 2016/679 (General Data Protection Regulation) art 9(1) https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX%3A32016R0679

[49] European Data Protection Board, Guidelines 05/2020 on consent under Regulation 2016/679 (4 May 2020) https://www.edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf

[50] European Data Protection Board, ‘The French SA fines Clearview AI EUR 20 million’ (2022) https://www.edpb.europa.eu/news/national-news/2022/french-sa-fines-clearview-ai-eur-20-million_en

[51] CNIL, Deliberation No SAN-2022-019 (Clearview AI) (17 October 2022) https://www.cnil.fr/sites/default/files/atoms/files/deliberation_of_the_restricted_committee_no_san-2022-019_of_17_october_2022_concerning_clearview_ai.pdf

[52] Digital Personal Data Protection Act 2023 (India) s 2(t), s 2(n) (MeitY Gazette PDF) https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf

[53] Digital Personal Data Protection Act 2023 (India) s 3(c)(ii) (MeitY Gazette PDF) https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf

[54] European Data Protection Board, Guidelines 05/2020 on consent (n 49)

[55] CNIL, Deliberation No SAN-2022-019 (Clearview AI) (n 51).

[56] Digital Personal Data Protection Act 2023 (n 52) s 2(t).

[57] LJY v Persons Unknown [2017] EWHC 3230 (QB) https://inforrm.org/wp-content/uploads/2017/10/ljy-v-persons-unknown.pdf; JPH v XYZ & Ors [2015] EWHC 2871 (QB) (summary) https://www.carruthers-law.co.uk/news/jph-v-xyz-ors-2015-ewhc-2871/

[58] Criminal Justice and Courts Act 2015, s 33 https://www.legislation.gov.uk/ukpga/2015/2/section/33

[59] Crown Prosecution Service, ‘Communications Offences’ (24 March 2025) https://www.cps.gov.uk/legal-guidance/communications-offences; UK Sentencing Council, ‘Disclosing or threatening to disclose private sexual images’ (guideline) https://sentencingcouncil.org.uk/guidelines/disclosing-or-threatening-to-disclose-private-sexual-images/

[60] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (as amended to 6 April 2023) (MeitY official PDF) https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf

[61] Regulation (EU) 2022/2065 (Digital Services Act) arts 14–17 (OJ text) https://eur-lex.europa.eu/eli/reg/2022/2065/oj/eng

[62] European Commission, ‘Digital Services Act’ (EUR-Lex legislative summary) https://eur-lex.europa.eu/EN/legal-content/summary/digital-services-act.html

[63] International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) art 17 https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights

[64] Universal Declaration of Human Rights (adopted 10 December 1948) art 12 https://www.un.org/en/about-us/universal-declaration-of-human-rights

[65] Von Hannover v Germany (No 1) App no 59320/00 (ECtHR, 24 June 2004) https://hudoc.echr.coe.int/eng?i=001-61853

[66] M.L. v Slovakia App no 34159/17 (ECtHR, 14 October 2021) https://hudoc.echr.coe.int/eng?i=002-13434

[67] ICCPR (n 63) art 19(3).

[68] UN Human Rights Committee, General Comment No 34: Article 19: Freedoms of opinion and expression (12 September 2011) CCPR/C/GC/34 https://www.ohchr.org/sites/default/files/english/bodies/hrc/docs/gc34.pdf

[69] Convention on the Elimination of All Forms of Discrimination against Women (adopted 18 December 1979, entered into force 3 September 1981) arts 2, 5 https://www.ohchr.org/en/instruments-mechanisms/instruments/convention-elimination-all-forms-discrimination-against-women

[70] CEDAW Committee, General Recommendation No 35 on gender-based violence against women, updating General Recommendation No 19 (26 July 2017) CEDAW/C/GC/35 https://www.ohchr.org/en/documents/general-comments-and-recommendations/general-recommendation-no-35-2017-gender-based

[71] UN Women, FAQs: Digital abuse, trolling, stalking and other forms of technology-facilitated violence against women (10 February 2025) https://www.unwomen.org/en/articles/faqs/digital-abuse-trolling-stalking-and-other-forms-of-technology-facilitated-violence-against-women

[72] Human Rights Council, Draft resolution A/HRC/56/L.15: Technology-facilitated gender-based violence (3 July 2024) https://docs.un.org/en/A/HRC/56/L.15

[73] UN Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework (2011) https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf

[74] Regulation (EU) 2024/1689 (Artificial Intelligence Act) art 50(1)–(4) (OJ L 12 July 2024) https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ%3AL_202401689

[75] AI Act, recital 134 (deep-fake labelling rationale) https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ%3AL_202401689

[76] California Civil Code § 1708.86 (enacted by AB 602 (2019)) https://codes.findlaw.com/ca/civil-code/civ-sect-1708-86/

[77] AB 602 (2019) https://legiscan.com/CA/text/AB602/id/2055866

[78] Governor of California, ‘Governor Newsom signs bills to combat deepfake election content’ (17 September 2024) https://www.gov.ca.gov/2024/09/17/governor-newsom-signs-bills-to-combat-deepfake-election-content/

[79] Politico, ‘Elon Musk and X notch court win against California deepfake law’ (5 August 2025) https://www.politico.com/news/2025/08/05/elon-musk-x-court-win-california-deepfake-law-00494936

[80] H.R. 5586 (118th Cong. 2023–2024), ‘DEEPFAKES Accountability Act’ (bill text and summary) https://www.congress.gov/bill/118th-congress/house-bill/5586/text

[81] Cyberspace Administration of China, Provisions on the Administration of Deep Synthesis Internet Information Services (effective 10 January 2023) (English trans, ChinaLawTranslate) https://www.chinalawtranslate.com/en/deep-synthesis/; IAPP, ‘China’s deep synthesis regulation takes effect Jan. 10’ (2023) https://iapp.org/news/b/chinas-deepfake-regulation-takes-effect-jan-10

[82] Latham & Watkins, ‘China’s New AI Regulations’ (Client Alert, 16 August 2023) 3–4 https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf; TwoBirds, ‘New AI content labelling rules in China—what are they and how do they compare to the EU AI Act?’ (20 May 2025) https://www.twobirds.com/en/insights/2025/new-ai-content-labelling-rules-in-china-what-are-they-and-how-do-they-compare-to-the-eu-ai-act

[83] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (as updated 6 April 2023) https://www.meity.gov.in/static/uploads/2024/02/Information-Technology-Intermediary-Guidelines-and-Digital-Media-Ethics-Code-Rules-2021-updated-06.04.2023-.pdf

[84] Press Information Bureau (India), ‘Draft Digital Personal Data Protection Rules, 2025—public consultation’ (26 July 2025) https://www.pib.gov.in/PressReleasePage.aspx?PRID=2148944

[85] Regulation (EU) 2024/1689 (Artificial Intelligence Act) art 50 (transparency obligations) https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

[86] WIPO, ‘Artificial Intelligence and Intellectual Property Policy: WIPO Conversation’ (policy hub; issues paper and sessions) https://www.wipo.int/en/web/frontier-technologies/artificial-intelligence/conversation; UNESCO, Guidelines for the Governance of Digital Platforms (2023) https://www.unesco.org/en/internet-trust/guidelines

[87] Regulation (EU) 2024/1689 (n 85) art 50.

[88] WTO, Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) (1994) art 39 https://www.wto.org/english/docs_e/legal_e/27-trips.pdf; WTO, TRIPS Article 39 – Practice Note (2024) https://www.wto.org/english/res_e/publications_e/ai17_e/trips_art39_oth.pdf

[89] WIPO, Guide to Trade Secrets and Innovation (web publication) pt III (explaining TRIPS art 39 baseline) https://www.wipo.int/web-publications/wipo-guide-to-trade-secrets-and-innovation/en/part-iii-basics-of-trade-secret-protection.html

[90] Coalition for Content Provenance and Authenticity (C2PA), Technical Specification v1.3 (2023) https://spec.c2pa.org/specifications/specifications/1.3/specs/_attachments/C2PA_Specification.pdf; C2PA, Specification (current) https://c2pa.org/specifications/specifications/2.2/specs/C2PA_Specification.html

[91] INHOPE, ‘Association of Internet Hotline Providers’ (network overview) https://www.inhope.org/

[92] INHOPE, ‘Notice & Takedown (NTD)’ (explainer) https://www.inhope.org/EN/articles/notice-and-takedown-ntd

[93] UN Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework (2011) https://www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf; UN Digital Library record https://digitallibrary.un.org/record/720245

[94] Council of Europe, ‘About the Convention on Cybercrime (Budapest Convention)’ https://www.coe.int/en/web/cybercrime/the-budapest-convention; Council of Europe, Convention on Cybercrime (ETS 185), Explanatory Report (official English text) https://rm.coe.int/1680081561
