Artificial Intelligence and Data Protection in International Law

Authored By: Faten Wissam Yehia

Law Graduate From Lebanese University

Introduction

Artificial Intelligence (AI) has swiftly become a cornerstone of modern technological development. Whether in healthcare diagnostics, financial analytics, marketing personalization, or law enforcement tools, AI systems leverage vast swathes of personal data to train algorithms and generate decisions. This data dependency introduces significant legal and ethical challenges—especially regarding privacy, surveillance, and autonomy.

International law, once built upon state-to-state relations, must now confront technologies that easily transcend national borders and operate on global data flows. This article examines the current international legal framework addressing AI and data protection, explores its limitations, and proposes pathways toward robust, human rights–centred governance.

International Legal Foundations for Data Protection

Several international instruments collectively shape the landscape of data protection, though none were originally drafted specifically for AI:

OECD Privacy Guidelines (1980–2002)

The OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, first adopted in 1980 and later consolidated, remain foundational. They establish key principles such as purpose specification, data quality, security safeguards, and accountability—principles that have informed later legal regimes globally.

Council of Europe Convention 108 (1981) and Convention 108+ (2018)

The Council of Europe Convention 108, adopted in 1981, is the first legally binding international treaty dedicated to personal data protection. It obliges Parties to ensure that data processing is fair, carried out for specified purposes, and grounded in law, while granting individuals rights of access, rectification, and erasure.

In 2018, Convention 108 was modernized by an Amending Protocol—often referred to as Convention 108+. This updated instrument introduced data breach notification obligations, reinforced accountability, embedded privacy-by-design, and provided individuals with enhanced rights in algorithmic decision-making, including the right to obtain knowledge of the reasoning underlying the processing and the right to object.

EU General Data Protection Regulation (GDPR, 2016–2018)

Although regional, the GDPR has had global influence due to its extraterritorial reach. It grants individuals extensive data rights—access, rectification, erasure (“right to be forgotten”), data portability, and automated decision-making protections—and imposes heavy obligations on data controllers and processors.

UN Human Rights Resolution (2023)

In UN General Assembly Resolution 78/213 on “Promotion and protection of human rights in the context of digital technologies”, adopted 22 December 2023, Member States reaffirmed that human rights must be protected in digital contexts—including AI systems—and called for governance frameworks ensuring safe, secure, and trustworthy technology development.

Human Rights Online: ICCPR / UNHRC

Although not AI-specific, the International Covenant on Civil and Political Rights (ICCPR) protects freedom of expression and privacy. The UN Human Rights Council has affirmed that "the same rights that people have offline must also be protected online," reinforcing the applicability of these protections to digital and AI environments.

Unique Challenges Posed by AI to Data Protection

Several interrelated issues complicate the protection of data and privacy in AI contexts:

Cross‑Border Data Flows

AI often relies on cloud platforms and data collected from individuals across jurisdictions. This undermines enforcement of any single legal regime and demands harmonized cross-border privacy standards.

Opacity of Algorithms (“Black Boxes”)

AI’s opaque nature makes it difficult for individuals to understand how decisions are derived. Even developers may be unable to fully explain a system’s logic, which hampers compliance with transparency obligations under instruments such as Convention 108+ and the GDPR.

Bias and Discrimination

AI trained on biased datasets may perpetuate systemic discrimination. Legal frameworks often prohibit discrimination, but enforcement against AI-driven private entities remains weak, especially across borders.

Fragmented Legal Landscape

Regional differences—between the EU’s strict GDPR, OECD’s softer guidelines, and more lenient frameworks elsewhere—create incoherent protections that challenge global AI deployment.

Highlights from Global Debates and Governance

UN Moratorium on Harmful AI

In 2021, UN High Commissioner for Human Rights Michelle Bachelet called for a moratorium on AI systems that violate human rights or whose risks cannot be mitigated, warning of their large-scale potential to erode rights like freedom of expression, assembly, and privacy.

Anti-Facial Recognition Movement

Civil society globally campaigns against facial recognition technology (FRT), citing racial and gender biases, mass surveillance, and the erosion of civil liberties. These critiques underscore the necessity of embedding human rights safeguards in AI governance.

Towards a Stronger International Data Protection Framework for AI

To address AI’s unique threats while respecting innovation, multiple strategies could help evolve international law:

1. Establish a Binding AI‑Specific Data Protection Treaty

Building atop Convention 108+, such a treaty could cover data-driven AI and embed rights to transparency, fairness, contestability of decisions, and oversight.

2. Harmonize Regional and Multinational Standards

Align frameworks like GDPR, OECD Guidelines, APEC Privacy Framework, and emerging treaties to reduce jurisdictional fragmentation.

3. Mandate Corporate Accountability

Reinforce obligations under the UN Guiding Principles on Business and Human Rights (UNGPs), requiring AI developers to conduct human rights impact assessments and remediate harms.

4. Incorporate Ethical Principles into Law

Enshrine procedural safeguards—such as privacy by design, explainability, fairness reviews, and audit rights—into binding standards and certification frameworks.

5. Support Capacity Building for Global South States

Provide technical and legal resources to countries lacking strong data protection capacity so they can negotiate equitable data usage and safeguard citizens.

Conclusion

AI holds enormous potential to benefit society—but its deep dependence on personal data, and its capacity to infringe fundamental rights, demand robust legal safeguards. Current international instruments (the OECD Guidelines, Convention 108+, the GDPR, and the ICCPR) provide strong principles, yet they are insufficient to meet AI’s novel challenges.

The next stage of international law must embrace AI-specific regulation, enforceable transparency, fairness, and cross-border cooperation. Only by embedding human rights at the core of AI governance can we ensure that technology serves humanity—and does not undermine the values that define us.

Reference(S): (OSCOLA style)

OECD, OECD Guidelines on the Protection of Privacy and Trans‑border Flows of Personal Data (OECD Publishing, revised 2002)

Council of Europe, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (CETS No 108, opened for signature 28 January 1981, entered into force 1 October 1985)

Council of Europe, Amending Protocol to Convention 108 (CETS No 223, adopted 18 May 2018)

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation) [2016] OJ L119/1

UN General Assembly, ‘Promotion and protection of human rights in the context of digital technologies’ UNGA Res 78/213 (22 December 2023)

International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171

Michelle Bachelet, UN High Commissioner for Human Rights, statement calling for a moratorium on AI systems posing serious risks to human rights (2021)

Civil society campaigns against facial recognition technology: debates and concerns

UN Human Rights Council, ‘The promotion, protection and enjoyment of human rights on the Internet’ HRC Res 20/8 (2012)
