Algorithmic Bias in AI-Driven Decision Making: A Comparative Legal Analysis of the UAE, UK and EU

Authored By: Reema Basheer

Middlesex University Dubai

ABSTRACT

This article examines algorithmic bias in artificial intelligence (AI)-driven decision-making through a comparative legal analysis of the European Union, the United Kingdom, and the United Arab Emirates. Drawing on statutory provisions, case law, and institutional responses, it assesses how these jurisdictions define and regulate algorithmic discrimination. The analysis reveals divergent approaches: while the EU combines rights-based safeguards under the GDPR and AI Act, the UK emphasises innovation and sectoral oversight, and the UAE relies on ethical governance frameworks. Despite legal provisions, enforcement gaps persist due to the opacity of AI systems and the doctrinal limitations of existing non-discrimination law. The article concludes by proposing reforms to strengthen explainability, integrate assessment mechanisms, and modernise equality frameworks to better protect against algorithmic harms.

I. INTRODUCTION

Artificial intelligence (AI) systems now shape decisions in policing, finance, and employment, yet their use has prompted legal scrutiny due to algorithmic bias—systematic errors that produce unfair or discriminatory outcomes. A UK government report notes that “algorithmic bias can be defined in a variety of specific, technical ways, but is increasingly being used in reference to fairness and discrimination.”¹ Similarly, Dubai’s AI Ethics Guidelines describe it as “unfair prejudice” in outcomes.² In practice, these biases often arise unintentionally from biased data or flawed design, yet they can amplify social inequalities. The EU Agency for Fundamental Rights warns that such bias, often embedded in automated decisions, can reinforce structural discrimination and infringe fundamental rights.³

These risks are not hypothetical. In R (Bridges) v Chief Constable of South Wales Police [2020],⁴ the UK Court of Appeal ruled that live facial recognition technology breached privacy and equality duties due to a failure to assess demographic bias. Legislatively, the EU’s Artificial Intelligence Act (2024) imposes binding fairness obligations on high-risk AI systems, while GDPR Article 22 provides a right not to be subject to automated decisions with significant effects. The UK addresses bias through its Equality Act 2010 and UK GDPR, complemented by regulatory guidance from the Information Commissioner’s Office. The UAE’s legal framework is emerging; while the Federal Personal Data Protection Law includes the right to object to automated decisions (Article 18), most bias-related safeguards remain policy-driven.

This article applies a comparative doctrinal method to assess how the EU, UK, and UAE legally define, regulate, and respond to algorithmic bias in AI-driven decision-making. Part II examines the legislative frameworks governing algorithmic bias in each jurisdiction. Part III analyses how courts and institutions have interpreted these frameworks. Part IV assesses the adequacy of existing legal protections. Part V reviews recent regulatory developments, and Part VI proposes reforms to address identified shortcomings.

II. LEGAL FRAMEWORK

A. European Union

In the EU, algorithmic decision-making is primarily governed by data protection law. Article 22 of the GDPR grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The 2024 AI Act supplements this framework by imposing requirements of fairness, transparency, and accountability on high-risk AI systems. However, scholars argue that these laws provide limited real-world protection due to vague terminology and the inherent difficulty of understanding how algorithms operate.⁵ Although the EU framework establishes clear rights on paper, its capacity to prevent bias in practice remains uncertain.

B. United Kingdom

Post-Brexit, the UK mirrors GDPR protections through the UK GDPR and the Data Protection Act 2018. Additionally, the Equality Act 2010 prohibits indirect discrimination, a principle that extends to algorithmic outcomes. However, in the absence of comprehensive AI-specific legislation, regulation remains decentralised and sector-driven. This fragmented structure has drawn criticism for potentially weakening oversight, particularly as the UK pursues a pro-innovation stance with minimal regulatory friction.⁶

C. United Arab Emirates

The UAE’s approach to algorithmic bias combines limited legal rights with broader ethical commitments. Article 18 of Federal Decree-Law No 45 of 2021 allows individuals to object to decisions made solely by automated processing. However, binding obligations remain minimal. The 2024 UAE Charter for the Development and Use of Artificial Intelligence encourages inclusive, transparent, and non-discriminatory AI systems but lacks binding legal force. Ethical initiatives, including the Smart Dubai AI Principles, complement this framework by promoting fairness and accountability, though they remain non-enforceable.

III. JUDICIAL INTERPRETATION

A. United Kingdom: Courts Addressing Algorithmic Bias

British courts have begun engaging with the legal implications of algorithmic bias, most notably in R (Bridges) v Chief Constable of South Wales Police,⁷ where the Court of Appeal found that the use of live facial recognition technology breached data protection standards and the Public Sector Equality Duty under the Equality Act 2010 due to inadequate safeguards and failure to consider potential discriminatory outcomes. This judgment reinforced the obligation of public authorities to proactively examine algorithmic tools for discriminatory impact, even absent direct evidence of harm.

A similar challenge arose in 2020 when the Home Office withdrew its visa streaming algorithm after civil society organisations claimed it racially profiled applicants based on nationality.⁸ Though settled before judgment, the case demonstrated how equality and data protection law can be deployed to contest discriminatory AI in public administration. These developments illustrate judicial willingness to apply existing legal duties to algorithmic systems, even as doctrinal clarity remains limited.

B. European Union: Institutional Recognition Without Doctrinal Clarity

Although the Court of Justice of the European Union has yet to pronounce decisively on algorithmic bias, existing legal frameworks provide indirect avenues for redress. The EU’s Equality Directives prohibit indirect discrimination, a concept that scholars such as Zuiderveen Borgesius argue is particularly apt for identifying discriminatory outcomes produced by ostensibly neutral algorithms.⁹ Article 22 of the General Data Protection Regulation offers individuals the right not to be subjected to decisions based solely on automated processing with significant effects, but its practical enforcement remains limited due to evidentiary burdens and systemic opacity.

Institutional reports from the EU Agency for Fundamental Rights acknowledge the risk of discrimination arising from algorithmic systems, particularly in welfare, law enforcement, and employment contexts. The Dutch tax authority’s use of biased fraud detection algorithms and the invalidation of the SyRI welfare profiling system by Dutch courts¹⁰ illustrate the real-world consequences of algorithmic bias and the growing willingness of national institutions to intervene. However, as Kuśmierczyk observes, the reliance on broad legal principles such as fairness and human oversight may provide insufficient safeguards, underscoring the need for more tailored regulatory mechanisms.¹¹

C. United Arab Emirates: A Policy-Driven Approach in the Absence of Case Law

The UAE has yet to see judicial interpretation directly addressing algorithmic bias, and no reported court rulings have tested the application of anti-discrimination statutes to AI systems. Nonetheless, its legislative framework contains potentially applicable provisions. The 2015 Anti-Discrimination Law criminalises discriminatory acts on the basis of race, religion, caste, or ethnicity, and the 2021 Labour Law prohibits employment discrimination on multiple protected grounds. Although these laws do not explicitly reference algorithms, legal commentary suggests they would extend to algorithmic systems that produce biased outcomes.

In practice, the country has focused on soft law governance and executive oversight. The National AI Strategy 2031 and the Ethical AI Toolkit, issued by Smart Dubai, set out principles of fairness, explainability, and accountability, with regulators emphasising preventive oversight. Agencies such as the Ministry of AI and digital authorities in Dubai and Abu Dhabi require bias assessments for public-facing systems. To date, however, there have been no reported judicial enforcements concerning algorithmic bias, indicating that the UAE’s approach remains policy-led rather than adjudicative.

IV. ASSESSING THE ADEQUACY OF LEGAL PROTECTIONS

The opacity of algorithmic systems, often described as “black boxes,” poses a major challenge to legal oversight. Krištofík argues that machine-learning algorithms evolve in ways that resist traditional procedural scrutiny, undermining accountability in high-stakes decisions.¹² Although Article 22 of the GDPR and the EU AI Act both mandate human oversight and transparency for high-risk systems, these duties often translate into internal documentation rather than meaningful explanations accessible to individuals. Consequently, the right to human intervention has become what some commentators describe as a “second-class right.”¹³

Enforcement gaps persist in anti-discrimination law as well. Xenidis highlights that EU non-discrimination law was not designed for algorithmic contexts and suffers from doctrinal and procedural limitations.¹⁴ Similarly, while the UAE’s Federal Anti-Discrimination Law has been interpreted to include algorithmic bias, its practical implementation remains weak. Ethical guidelines such as Dubai’s AI Principles encourage fairness but, as soft-law instruments, they lack binding force.

Comparatively, the UK follows a more innovation-friendly model, assigning sectoral regulators oversight powers under a principles-based framework. However, critics argue that this approach results in fragmentation and lacks enforceable safeguards. Both the EU’s rights-based regulation and the UK’s innovation-oriented approach exhibit enforcement gaps, especially with respect to opaque systems.

V. RECENT DEVELOPMENTS

The European Union’s Artificial Intelligence Act, in force since August 2024, introduces binding obligations on providers of high-risk AI systems in domains such as employment and credit scoring. However, its overlap with existing regimes such as the GDPR and Digital Services Act has prompted calls for harmonised compliance mechanisms. The European Parliament’s 2025 study critiques the Act’s “product-safety” overlay and highlights the potential burden of parallel assessments under different authorities.¹⁵

Meanwhile, the United Kingdom has advanced a regulator-led strategy. The Information Commissioner’s Office’s 2023 update to its AI guidance stresses fairness and transparency, while the Digital Regulation Cooperation Forum’s 2024 cross-regulatory hub reflects a collaborative model to facilitate ethical AI innovation. In the UAE, the 2024 Charter for the Development and Use of AI prioritises algorithmic fairness and human oversight, supported by local ethical initiatives such as Smart Dubai’s AI Guidelines and Digital Dubai’s voluntary bias-assessment tools. These developments underscore a gradual shift from principle-based governance toward enforceable legal norms.

VI. RECOMMENDATIONS

To address regulatory shortcomings in tackling algorithmic bias, the following legal reforms are proposed:

  1. Codify meaningful explainability. Legal frameworks should require that individuals receive intelligible, post-hoc explanations for automated decisions with significant effects, rather than relegating transparency to internal documentation. This would enhance Article 22 GDPR protections and fulfil transparency duties under the AI Act.

  2. Clarify “significant effect.” The ambiguity surrounding the term “legal or similarly significant effects” in both the GDPR and AI Act should be resolved through legislative clarification. A narrower interpretation focused on materially consequential decisions—such as denial of credit, employment, or access to services—would help operationalise safeguards.

  3. Integrate assessment regimes. Regulators should streamline the GDPR’s Data Protection Impact Assessments (DPIAs) and the AI Act’s Fundamental Rights Impact Assessments (FRIAs). Harmonised templates or mutual recognition mechanisms would reduce administrative burdens and promote consistent evaluations.

  4. Institutionalise oversight and audits. The AI Act’s provisions on human oversight should be reinforced by requiring regular independent audits, dataset reviews, and effective complaint mechanisms. Meaningful intervention must extend beyond formalistic inclusion.

  5. Modernise equality law. Anti-discrimination statutes should be updated to account for algorithmic proxies and structural bias. Measures might include shifting the burden of proof in automated contexts and mandating algorithmic bias audits.

  6. Harmonise regulatory frameworks. Enhanced coordination between sectoral regulators (e.g., data protection, competition, and consumer authorities) is essential. European Parliament analysts recommend joint guidance and shared “regulatory sandboxes” to reduce fragmentation.¹⁶

VII. CONCLUSION

This article has evaluated the legal responses to algorithmic bias in the European Union, the United Kingdom, and the United Arab Emirates. While the EU has developed rights-based mechanisms under the GDPR and the 2024 AI Act, enforcement remains inconsistent due to technical opacity and conceptual gaps. The UK relies on existing frameworks such as the Equality Act 2010 and UK GDPR, with courts beginning to apply these laws to AI systems, as demonstrated in Bridges v South Wales Police, though oversight remains decentralised. The UAE’s approach is primarily policy-driven, with limited binding obligations and no reported case law, despite comprehensive ethical frameworks and strategic guidance.

The analysis has identified systemic shortcomings, particularly regarding transparency, the adaptability of discrimination law, and regulatory coordination. The proposed reforms aim to improve explainability, strengthen oversight mechanisms, and modernise legal protections against AI-driven discrimination. As algorithmic systems become increasingly embedded in decision-making processes, legal frameworks must evolve to ensure that technological advancement does not come at the expense of fundamental rights and equality.

FOOTNOTE(S):

  1. [UK Government Report on Algorithmic Bias – citation to be verified]
  2. Smart Dubai, AI Ethics Principles and Guidelines (2019) https://www.digitaldubai.ae/initiatives/ai-principles accessed 24 November 2025.
  3. EU Agency for Fundamental Rights, Bias in Algorithms – Artificial Intelligence and Discrimination (2022).
  4. [2020] EWCA Civ 1058.
  5. M Kuśmierczyk, ‘Algorithmic Bias in the Light of the GDPR and the Proposed AI Act’ (SSRN, 8 May 2022) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4117936 accessed 25 November 2025.
  6. H Roberts and others, ‘Artificial Intelligence Regulation in the United Kingdom: A Path to Good Governance and Global Leadership?’ (2023) 12(2) Internet Policy Review.
  7. R (Bridges) v Chief Constable of South Wales Police (n 4).
  8. Statewatch, ‘UK: Threat of Legal Challenge Forces Home Office to Abandon “Racist Visa Algorithm”’ (4 August 2020) https://www.statewatch.org/news/2020/august/uk-threat-of-legal-challenge-forces-home-office-to-abandon-racist-visa-algorithm accessed 27 November 2025.
  9. FJZ Borgesius, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making (Council of Europe 2018).
  10. Rechtbank Den Haag (District Court of The Hague), 5 February 2020, ECLI:NL:RBDHA:2020:865.
  11. Kuśmierczyk (n 5).
  12. A Krištofík, ‘Bias in AI (Supported) Decision Making: Old Problems, New Technologies’ (2025) 16(1) International Journal for Court Administration 1.
  13. [Source to be verified]
  14. R Xenidis, ‘When Computers Say No: Towards a Legal Response to Algorithmic Discrimination in Europe’ in M Corrales Compagnucci and others (eds), Research Handbook on Law and Technology (Edward Elgar 2023).
  15. Hans Graux and others, Interplay Between the AI Act and the EU Digital Legislative Framework (European Parliament, Study for the ITRE Committee, October 2025).
  16. ibid.

BIBLIOGRAPHY

PRIMARY SOURCES

Legislation

  • Data Protection Act 2018
  • Equality Act 2010
  • Federal Decree-Law No 2 of 2015 on Combating Discrimination and Hatred (UAE)
  • Federal Decree-Law No 33 of 2021 on the Regulation of Labour Relations (UAE)
  • Federal Decree-Law No 45 of 2021 on the Protection of Personal Data (UAE)
  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data [2016] OJ L119/1 (General Data Protection Regulation)

Cases

  • R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058
  • Rechtbank Den Haag (District Court of The Hague), 5 February 2020, ECLI:NL:RBDHA:2020:865

SECONDARY SOURCES

Books and Book Chapters

  • Xenidis R, ‘When Computers Say No: Towards a Legal Response to Algorithmic Discrimination in Europe’ in Corrales Compagnucci M and others (eds), Research Handbook on Law and Technology (Edward Elgar 2023)

Journal Articles

  • Krištofík A, ‘Bias in AI (Supported) Decision Making: Old Problems, New Technologies’ (2025) 16(1) International Journal for Court Administration 1
  • Lendvai GF and Gosztonyi G, ‘Algorithmic Bias as a Core Legal Dilemma in the Age of Artificial Intelligence: Conceptual Basis and the Current State of Regulation’ (2025) 14(3) Laws 41
  • Roberts H and others, ‘Artificial Intelligence Regulation in the United Kingdom: A Path to Good Governance and Global Leadership?’ (2023) 12(2) Internet Policy Review

Working Papers

  • Kuśmierczyk M, ‘Algorithmic Bias in the Light of the GDPR and the Proposed AI Act’ (SSRN, 8 May 2022)

Reports and Institutional Documents

  • Borgesius FJZ, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making (Council of Europe 2018)
  • EU Agency for Fundamental Rights, Bias in Algorithms – Artificial Intelligence and Discrimination (2022)
  • Graux H and others, Interplay Between the AI Act and the EU Digital Legislative Framework (European Parliament, Study for the ITRE Committee, October 2025)
  • Information Commissioner’s Office, Guidance on AI and Data Protection (updated March 2023)

Web Sources

  • Smart Dubai, AI Ethics Principles and Guidelines (2019) https://www.digitaldubai.ae/initiatives/ai-principles accessed 24 November 2025
  • Statewatch, ‘UK: Threat of Legal Challenge Forces Home Office to Abandon “Racist Visa Algorithm”’ (4 August 2020) https://www.statewatch.org/news/2020/august/uk-threat-of-legal-challenge-forces-home-office-to-abandon-racist-visa-algorithm accessed 27 November 2025
