Authored By: S.M. Sanjidul Islam Niloy
Stamford University Bangladesh
Abstract
Artificial Intelligence (AI) has transformed the global digital economy by enabling advanced data-driven technologies. However, the proliferation of AI raises profound legal and ethical questions regarding the protection of personal data. While the European Union’s General Data Protection Regulation (GDPR) remains the most comprehensive global framework, the United States relies on a fragmented, sectoral approach, and South Asian jurisdictions such as India and Bangladesh are still developing their regimes. This article examines the enforcement challenges posed by AI across these regions, focusing on issues such as automated decision-making, algorithmic bias, cross-border data transfers, and institutional capacity. Through a doctrinal and comparative analysis, the article argues that existing legal frameworks are insufficient to address AI-specific risks. It concludes by suggesting reforms, including AI-focused amendments, stronger regulatory bodies, and enhanced global cooperation.
Introduction
The 21st century has been defined by the rise of data as the “new oil” of the digital economy. From online banking and e-commerce to social media and healthcare, personal information is collected, stored, and processed at an unprecedented scale. Artificial Intelligence (AI), which thrives on massive datasets, intensifies both the benefits and risks of this digital transformation. Algorithms now decide who gets hired, what medical treatment is recommended, and even how courts assess bail risks. Yet, these technologies simultaneously threaten privacy, equality, and accountability.
Data protection laws seek to establish safeguards for individuals by imposing obligations on data controllers and processors. However, the emergence of AI challenges traditional legal concepts such as consent, purpose limitation, and proportionality. For instance, AI-driven predictive analytics often processes data beyond its original collection purpose, raising concerns about lawful basis and transparency. Moreover, automated decision-making, which is central to AI, conflicts with the human-centric safeguards envisioned in many privacy regimes.
Globally, jurisdictions have adopted divergent approaches. The European Union (EU), through its General Data Protection Regulation (GDPR), enforces some of the strictest requirements on personal data handling, including special provisions for automated processing. The United States (US), in contrast, maintains a fragmented model of sector-specific laws, such as the California Consumer Privacy Act (CCPA), and lacks a comprehensive federal framework. In South Asia, developments are relatively recent: India’s Digital Personal Data Protection Act 2023 is the country’s first comprehensive privacy law, while Bangladesh has proposed a draft Data Protection Bill that faces criticism for weak enforcement mechanisms.
The urgency of analyzing these diverse frameworks lies in the fact that AI transcends borders. A predictive policing tool developed in Silicon Valley may be deployed in Dhaka, while data collected in Bangalore may be processed in Europe. Enforcement challenges, therefore, extend beyond domestic jurisdictions to questions of cross-border governance.
This article argues that while legal regimes have made significant progress in recognizing privacy as a fundamental right, they remain inadequate in addressing AI-specific risks. By comparing the EU, US, India, and Bangladesh, this article identifies enforcement gaps and proposes reforms for creating a more effective, globally coordinated legal framework.
Research Methodology
This article adopts a doctrinal and comparative research methodology. A doctrinal approach is employed to analyze statutory provisions, judicial decisions, and regulatory frameworks governing data protection and AI in different jurisdictions. Primary sources include:
∙ The General Data Protection Regulation (GDPR) (EU),
∙ The California Consumer Privacy Act (CCPA) and related U.S. state and federal regulations,
∙ The Digital Personal Data Protection Act 2023 (India), and
∙ The Draft Data Protection Bill of Bangladesh.
Case law forms an essential foundation of this study. Landmark rulings such as Google Spain v. AEPD, Schrems I and II, and India’s Justice K.S. Puttaswamy v. Union of India are examined to assess judicial recognition of privacy rights.
Comparative analysis highlights divergences between developed and developing jurisdictions in their ability to enforce privacy laws against AI-driven practices. Secondary sources, such as scholarly articles, government reports, and media analyses, are used to contextualize primary materials. The article is analytical rather than descriptive: it critiques the adequacy of existing laws and explores their effectiveness in practice.
Main Body
- Legal Frameworks
- European Union: The GDPR
The GDPR, applicable since May 2018, is widely regarded as the gold standard for global data protection. Its provisions are particularly significant in the context of AI. Article 22 grants individuals the right not to be subject to decisions “based solely on automated processing” that produce legal or similarly significant effects, unless specific exceptions and safeguards apply. Furthermore, Articles 5 and 6 codify principles such as purpose limitation, data minimization, and lawfulness of processing.
In practice, the GDPR places strict obligations on AI developers and deployers. For example, AI systems used in credit scoring must ensure transparency and fairness, allowing individuals to request human intervention. However, enforcement remains challenging. National supervisory authorities face resource limitations, the European Data Protection Board (EDPB) coordinates rather than enforces, and penalties, though significant (up to 4% of global annual turnover), are unevenly applied across member states.
- United States: A Fragmented Approach
Unlike the EU, the U.S. lacks a comprehensive federal privacy law. Instead, it relies on sectoral statutes such as the Health Insurance Portability and Accountability Act (HIPAA), the Children’s Online Privacy Protection Act (COPPA), and state laws like the California Consumer Privacy Act (CCPA) and its amendment, the California Privacy Rights Act (CPRA).
This fragmented model creates gaps in regulating AI-driven data processing. While California gives consumers the right to opt out of certain automated decision-making, other states offer weaker protections. Moreover, the Federal Trade Commission (FTC) plays a central role in consumer protection through its authority over unfair or deceptive practices, but it lacks explicit AI-related enforcement powers. The result is regulatory inconsistency, which is problematic given AI’s cross-border, cross-sectoral applications.
- India: The Digital Personal Data Protection Act 2023
India passed the Digital Personal Data Protection Act (DPDP Act) 2023, marking its first comprehensive data protection regime. The Act establishes the Data Protection Board of India to oversee compliance. Key features include rights of individuals to access, correct, and erase personal data, along with obligations on data fiduciaries.
While the Act reflects global trends, it has limitations in dealing with AI. The law emphasizes consent as the primary legal basis for processing, yet in AI contexts, meaningful consent is often impossible due to algorithmic opacity and the scale of processing. Moreover, the Board’s enforcement powers are still evolving, raising concerns about institutional independence and capacity.
- Bangladesh: Draft Data Protection Bill
Bangladesh has proposed a Draft Data Protection Bill modeled partly on India’s approach. It seeks to establish a Data Protection Authority and recognizes data subject rights. However, critics argue that the bill prioritizes state surveillance interests over individual privacy. Broad exemptions for government agencies undermine the balance between security and personal liberty.
In the context of AI, such exemptions could allow unregulated deployment of facial recognition systems or predictive policing tools, raising serious human rights concerns. Without strong enforcement mechanisms, the bill risks being symbolic rather than transformative.
- Judicial & Regulatory Interpretation
- European Union
The EU’s courts have played a pivotal role in shaping the contours of data protection in the digital age.
∙ Google Spain v. AEPD (2014): The Court of Justice of the European Union (CJEU) recognized the “right to be forgotten,” holding that individuals can require search engines to delist links to personal data that is inadequate, irrelevant, or no longer relevant.
∙ Schrems I (2015) and Schrems II (2020): The CJEU invalidated the Safe Harbor and Privacy Shield frameworks for EU-US data transfers, respectively, due to inadequate U.S. safeguards against government surveillance.
Despite these strong judicial pronouncements, enforcement faces difficulties. National supervisory authorities often interpret the rules inconsistently, and large technology companies exploit jurisdictional fragmentation within the EU.
- United States
In the absence of a federal privacy statute, U.S. courts and regulators have filled the gap selectively.
∙ FTC v. Facebook (2019): The Federal Trade Commission secured a record $5 billion civil penalty against Facebook for privacy violations.
∙ Carpenter v. United States (2018): The Supreme Court held that warrantless government access to historical cell-site location information violates the Fourth Amendment.
However, enforcement remains patchy. The absence of explicit AI provisions in federal law means that issues such as algorithmic bias and automated decision-making are addressed inconsistently.
- India
∙ Justice K.S. Puttaswamy v. Union of India (2017): The Supreme Court unanimously declared the right to privacy as a fundamental right under Article 21 of the Constitution.
∙ K.S. Puttaswamy (Aadhaar) v. Union of India (2018): While upholding the Aadhaar scheme, the Court restricted its use, emphasizing proportionality and safeguards.
Nevertheless, Indian courts have yet to address AI-specific cases directly, though the groundwork for such disputes has been laid.
- Bangladesh
Bangladesh’s judiciary has been slower to articulate privacy jurisprudence. Privacy has received only limited explicit recognition as a fundamental right, and courts have yet to rule decisively on AI-related disputes. Judicial interpretation will be crucial once the Data Protection Bill is enacted.
- Critical Analysis
- Algorithmic Opacity and Consent Fatigue: AI’s “black box” nature undermines meaningful consent, creating risks of uninformed or coerced agreements.
- Automated Decision-Making and Human Rights Risks: Bias in AI systems threatens equality, and most jurisdictions provide weak safeguards.
- Cross-Border Enforcement Challenges: AI development is transnational, yet regulators lack jurisdictional reach, as illustrated by the Schrems cases.
- Weak Institutional Capacity in Developing Jurisdictions: Enforcement depends on strong regulators. India and Bangladesh lack resources and independence, risking symbolic rather than substantive protection.
- Recent Developments
- EU AI Act (2024): Introduces a risk-based approach to AI regulation, complementing the GDPR.
- United States: Non-binding initiatives such as the Blueprint for an AI Bill of Rights (2022) and Executive Order 14110 on Safe, Secure, and Trustworthy AI (2023).
- India: The DPDP Act 2023 imposes obligations on data fiduciaries but leaves AI-specific gaps.
- Bangladesh: The Draft Data Protection Bill is criticized for privileging state surveillance over rights.
Suggestions / Way Forward
- AI-Specific Amendments: Data protection laws should explicitly address algorithmic transparency, fairness, and accountability.
- Cross-Border Cooperation: Harmonized frameworks, such as mutual recognition of adequacy standards, are essential.
- Independent and Resourced Regulators: Data protection authorities must have financial autonomy and technical expertise to monitor AI.
- Public Awareness and Corporate Responsibility: Citizens should be educated about AI risks, while companies should adopt ethical AI guidelines.
- Judicial Activism in Developing States: Courts in India and Bangladesh must proactively interpret constitutional rights to restrain surveillance and ensure AI accountability.
Conclusion
AI represents both a transformative opportunity and a profound challenge for legal systems worldwide. While the EU has pioneered comprehensive frameworks like the GDPR and AI Act, enforcement hurdles remain. The U.S., with its fragmented approach, lags behind in ensuring consistent protections. India has made significant progress through the DPDP Act, but institutional weaknesses and government exemptions undermine its effectiveness. Bangladesh risks adopting a framework that legitimizes surveillance more than it protects rights.
The comparative study demonstrates that existing data protection laws are inadequate to address AI-specific risks. Stronger, harmonized, and enforceable regulations—coupled with empowered regulators and active courts—are necessary to protect individuals from the unchecked expansion of AI. The future of privacy depends not only on robust domestic legislation but also on global cooperation to ensure that innovation is balanced with human dignity and fundamental rights.
References
∙ Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation), 2016 O.J. (L 119) 1.
∙ Google Spain SL v. Agencia Española de Protección de Datos (AEPD), Case C-131/12, ECLI:EU:C:2014:317.
∙ Schrems v. Data Prot. Comm’r (Schrems I), Case C-362/14, ECLI:EU:C:2015:650.
∙ Data Prot. Comm’r v. Facebook Ireland Ltd. (Schrems II), Case C-311/18, ECLI:EU:C:2020:559.
∙ Justice K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1 (India).
∙ K.S. Puttaswamy v. Union of India, (2019) 1 SCC 1 (India) (Aadhaar Case).
∙ Carpenter v. United States, 138 S. Ct. 2206 (2018).
∙ Federal Trade Commission v. Facebook, Inc., No. 19-cv-2184 (D.D.C. 2019).
∙ Digital Personal Data Protection Act, 2023 (India).
∙ Draft Data Protection Bill, 2023 (Bangladesh).
∙ Exec. Order No. 14110, Safe, Secure, and Trustworthy Artificial Intelligence, 88 Fed. Reg. 75191 (Oct. 30, 2023).
∙ The White House, Blueprint for an AI Bill of Rights (Oct. 2022).