Authored By: Reema Basheer
Middlesex University Dubai
ABSTRACT
This article examines algorithmic bias in artificial intelligence (AI)-driven decision-making through a comparative legal analysis of the European Union, the United Kingdom, and the United Arab Emirates. Drawing on statutory provisions, case law, and institutional responses, it assesses how these jurisdictions define and regulate algorithmic discrimination. The analysis reveals divergent approaches: while the EU combines rights-based safeguards under the GDPR and AI Act, the UK emphasises innovation and sectoral oversight, and the UAE relies on ethical governance frameworks. Despite legal provisions, enforcement gaps persist due to the opacity of AI systems and the doctrinal limitations of existing non-discrimination law. The article concludes by proposing reforms to strengthen explainability, integrate assessment mechanisms, and modernise equality frameworks to better protect against algorithmic harms.
I. INTRODUCTION
Artificial intelligence (AI) systems now shape decisions in policing, finance, and employment, yet their use has prompted legal scrutiny over algorithmic bias: systematic errors that produce unfair or discriminatory outcomes. For example, a UK government report notes that “algorithmic bias can be defined in a variety of specific, technical ways, but is increasingly being used in reference to fairness and discrimination”. Similarly, Dubai’s AI Ethics Guidelines describe it as “unfair prejudice” in outcomes. In practice, these biases often arise unintentionally from biased data or flawed design, yet they can amplify social inequalities. The EU Agency for Fundamental Rights warns that such bias, often embedded in automated decisions, can reinforce structural discrimination and infringe fundamental rights. These risks are not hypothetical: in Bridges v South Wales Police [2020], the UK Court of Appeal ruled that live facial recognition technology breached privacy and equality duties due to a failure to assess demographic bias. Legislatively, the EU’s Artificial Intelligence Act (2024) imposes binding fairness obligations on high-risk AI systems, while GDPR Article 22 provides a right not to be subject to automated decisions with significant effects. The UK addresses bias through its Equality Act 2010 and UK GDPR, complemented by regulatory guidance from the Information Commissioner’s Office. The UAE’s legal framework is still emerging: while the Federal Personal Data Protection Law includes the right to object to automated decisions (Art. 18), most bias-related safeguards remain policy-driven. This article applies a comparative doctrinal method to assess how the EU, UK, and UAE legally define, regulate, and respond to algorithmic bias in AI-driven decision-making.
II. LEGAL FRAMEWORK
European Union
In the EU, algorithmic decision-making is mainly governed by data protection law. Article 22 of the GDPR gives individuals the right not to be subject to decisions made solely by automated means that have significant effects. The 2024 AI Act adds rules requiring fairness, transparency, and accountability in high-risk AI systems. However, scholars argue that these laws provide weak real-world protection, due to vague terms and the difficulty of understanding how algorithms work. Although the EU framework sets out clear rights, its ability to prevent bias in practice is still uncertain.
United Kingdom
The UK, post-Brexit, mirrors GDPR protections through the UK GDPR and the Data Protection Act 2018. Additionally, the Equality Act 2010 prohibits indirect discrimination, which extends to algorithmic outcomes. However, in the absence of a comprehensive AI-specific law, regulation remains decentralised and sector-driven. This fragmented structure has drawn criticism for potentially weakening oversight, particularly as the UK pursues a pro-innovation stance with minimal regulatory friction.
United Arab Emirates
The UAE’s approach to algorithmic bias combines limited legal rights with broader ethical commitments. Article 18 of Federal Decree-Law No 45 of 2021 allows individuals to object to decisions made solely by automated processing. However, binding obligations are minimal. The 2024 UAE Charter for the Development and Use of Artificial Intelligence encourages inclusive, transparent, and non-discriminatory AI systems but lacks binding legal force. Ethical initiatives, including the Smart Dubai AI Principles, complement this framework by promoting fairness and accountability, though they remain non-enforceable.
III. JUDICIAL INTERPRETATION
- United Kingdom: Courts Addressing Algorithmic Bias
British courts have begun engaging with the legal implications of algorithmic bias, most notably in R (Bridges) v Chief Constable of South Wales Police, where the Court of Appeal found that the use of live facial recognition technology breached data protection standards and the Public Sector Equality Duty under the Equality Act 2010 due to inadequate safeguards and failure to consider potential discriminatory outcomes. This judgment reinforced the obligation of public authorities to proactively examine algorithmic tools for discriminatory impact, even without direct evidence. A similar challenge arose in 2020, when the Home Office withdrew its visa streaming algorithm after civil society organisations claimed it racially profiled applicants based on nationality. Though settled pre-judgment, the case demonstrated how equality and data protection law can be used to contest discriminatory AI in public administration. These developments illustrate a judicial willingness to apply existing legal duties to algorithmic systems, even as doctrinal clarity remains limited.
- European Union: Institutional Recognition Without Doctrinal Clarity
Although the Court of Justice of the European Union has yet to pronounce decisively on algorithmic bias, existing legal frameworks provide indirect avenues for redress. The EU’s Equality Directives prohibit indirect discrimination, a concept that scholars such as Zuiderveen Borgesius argue is particularly apt for identifying discriminatory outcomes produced by ostensibly neutral algorithms. Article 22 of the General Data Protection Regulation offers individuals the right not to be subjected to decisions based solely on automated processing with significant effects, but its practical enforcement remains limited due to evidentiary burdens and systemic opacity. Institutional reports from the EU Agency for Fundamental Rights acknowledge the risk of discrimination arising from algorithmic systems, particularly in welfare, law enforcement, and employment contexts. The Dutch tax authority’s use of biased fraud detection algorithms and the invalidation of the SyRI welfare profiling system by Dutch courts illustrate the real-world consequences of algorithmic bias and the growing willingness of national institutions to intervene. However, as Kuśmierczyk and others observe, the reliance on broad legal principles such as fairness and human oversight may provide insufficient safeguards, underscoring the need for more tailored regulatory mechanisms.
- United Arab Emirates: A Policy-Driven Approach in the Absence of Case Law
The UAE has yet to see judicial interpretation directly addressing algorithmic bias, and no reported court rulings have tested the application of anti-discrimination statutes to AI systems. Nonetheless, its legislative framework contains potentially applicable provisions. The 2015 Anti-Discrimination Law criminalises discriminatory acts on the basis of race, religion, caste, or ethnicity, and the 2021 Labour Law prohibits employment discrimination on multiple protected grounds. Although these laws do not reference algorithms, legal commentary suggests they would extend to algorithmic systems that produce biased outcomes. In practice, the country has focused on soft law governance and executive oversight. The National AI Strategy 2031 and the Ethical AI Toolkit, issued by Smart Dubai, set out principles of fairness, explainability, and accountability, with regulators emphasising preventive oversight. Agencies such as the Ministry of AI and digital authorities in Dubai and Abu Dhabi require bias assessments for public-facing systems. To date, however, there are no reported judicial enforcements or litigation concerning algorithmic bias, indicating the UAE’s approach remains policy-led rather than adjudicative.
IV. ASSESSING THE ADEQUACY OF LEGAL PROTECTIONS
The opacity of algorithmic systems, often described as “black boxes”, poses a major challenge to legal oversight. Krištofík argues that machine-learning algorithms evolve in ways that resist traditional procedural scrutiny, undermining accountability in high-stakes decisions. Although Article 22 GDPR and the EU AI Act both mandate human oversight and transparency for high-risk systems, these duties often translate into internal documentation rather than meaningful explanations for individuals. Consequently, the right to human intervention has become what some commentators call a “second-class right.”
Enforcement gaps persist in anti-discrimination law as well. Xenidis highlights that EU non-discrimination law was not built for algorithmic contexts and suffers from doctrinal and procedural limitations. Similarly, while commentators argue that the UAE’s federal anti-discrimination law can reach algorithmic bias, its practical implementation remains untested. Ethical guidelines like Dubai’s AI Principles encourage fairness, but as soft-law instruments, they lack binding force.
Comparatively, the UK follows a more innovation-friendly model, assigning sectoral regulators oversight powers under a principles-based framework. However, critics argue that this results in fragmentation and lacks enforceable safeguards. While the EU favours rights-based regulation and the UK promotes innovation, both regimes exhibit enforcement gaps, especially with respect to opaque systems.
V. RECENT DEVELOPMENTS
The European Union’s Artificial Intelligence Act, in force since August 2024, introduces binding obligations on providers of high-risk AI systems in domains such as employment and credit scoring, but its overlap with existing regimes like the GDPR and DSA has prompted calls for harmonised compliance mechanisms. The European Parliament’s 2025 study critiques the Act’s “product-safety” overlay and highlights the potential burden of parallel assessments under different authorities. Meanwhile, the United Kingdom has advanced a regulator-led strategy: the ICO’s 2023 update to its AI guidance stresses fairness and transparency, while the DRCF’s 2024 cross-regulatory hub reflects a collaborative model to facilitate ethical AI innovation. In the UAE, the 2024 Charter for the Development and Use of AI prioritises algorithmic fairness and human oversight, supported by local ethical initiatives such as Smart Dubai’s AI Guidelines and Digital Dubai’s voluntary bias-assessment tools. Notably, commentators read the federal Anti-Discrimination Law as extending to algorithmic bias, underscoring a gradual shift from principle-based governance to enforceable legal norms.
VI. RECOMMENDATIONS
To address regulatory shortcomings in tackling algorithmic bias, the following legal reforms are proposed:
- Codify meaningful explainability. Legal frameworks should require that individuals receive intelligible, post-hoc explanations for automated decisions with significant effects, rather than relegating transparency to internal documentation. This would enhance Article 22 GDPR protections and fulfil transparency duties under the AI Act.
- Clarify “significant effect.” The ambiguity surrounding the term “legal or similarly significant effects” in both the GDPR and AI Act should be resolved through legislative clarification. Narrow interpretation focused on materially consequential decisions, such as denial of credit, employment, or access to services, would help operationalise safeguards.
- Integrate assessment regimes. Regulators should streamline the GDPR’s Data Protection Impact Assessments (DPIAs) and the AI Act’s Fundamental Rights Impact Assessments (FRIAs). Harmonised templates or mutual recognition mechanisms would reduce administrative burdens and promote consistent evaluations.
- Institutionalise oversight and audits. The AI Act’s provisions on human oversight should be reinforced by requiring regular independent audits, dataset reviews, and effective complaint mechanisms. Meaningful intervention must go beyond formalistic inclusion.
- Modernise equality law. Anti-discrimination statutes should be updated to account for algorithmic proxies and structural bias. Measures might include shifting the burden of proof in automated contexts and mandating algorithmic bias audits.
- Harmonise regulatory frameworks. Enhanced coordination between sectoral regulators (e.g. data protection, competition, consumer authorities) is essential. European Parliament analysts recommend joint guidance and shared “regulatory sandboxes” to reduce fragmentation.
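To make the audit recommendation concrete, the following sketch shows one simple metric an algorithmic bias audit might compute: the disparate-impact ratio between group selection rates. The data, group labels, and the 0.8 benchmark (the “four-fifths rule” drawn from US employment-selection guidance) are illustrative assumptions, not requirements of any of the regimes discussed above.

```python
# Minimal sketch of a disparate-impact check of the kind a bias audit
# might include. All data here are hypothetical; the 0.8 threshold is
# the "four-fifths rule" from US employment-selection guidance and is
# shown only as one possible benchmark.
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (applicant group, approved?)
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")   # -> 0.62 (0.5 / 0.8)
print("flag for review" if ratio < 0.8 else "within threshold")
```

A real audit would go well beyond a single ratio (examining proxies, error-rate disparities, and intersectional effects), but even this simple check illustrates the kind of quantified, repeatable evidence that a statutory audit duty could require providers to produce.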
VII. EXECUTIVE SUMMARY
This article evaluates the legal responses to algorithmic bias in the European Union, the United Kingdom, and the United Arab Emirates. While the EU has developed rights-based mechanisms under the GDPR and the 2024 AI Act, enforcement remains patchy due to technical opacity and conceptual gaps. The UK relies on existing frameworks like the Equality Act 2010 and UK GDPR, with courts beginning to apply these laws to AI, as in Bridges v South Wales Police, though oversight is decentralised. The UAE’s approach is primarily policy-driven, with limited binding obligations and no reported case law, despite ethical frameworks and strategic guidance.
The analysis identifies systemic shortcomings, particularly around transparency, discrimination law, and regulatory coordination, and proposes targeted reforms to improve explainability, strengthen oversight, and modernise legal protections against AI-driven discrimination.
Bibliography
PRIMARY SOURCES
Legislation
Data Protection Act 2018
Equality Act 2010
Federal Decree-Law No 2 of 2015 on Combating Discrimination and Hatred (UAE)
Federal Decree-Law No 33 of 2021 on the Regulation of Labour Relations (UAE)
Federal Decree-Law No 45 of 2021 on the Protection of Personal Data (UAE)
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data [2016] OJ L119/1 (General Data Protection Regulation)
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence [2024] OJ L 2024/1689 (Artificial Intelligence Act)
Cases
R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058
Rechtbank Den Haag (District Court of The Hague), 5 February 2020, ECLI:NL:RBDHA:2020:865 (SyRI)
SECONDARY SOURCES
Books and Book Chapters
R Xenidis, ‘When Computers Say No: Towards a Legal Response to Algorithmic Discrimination in Europe’ in M Corrales Compagnucci and others (eds), Research Handbook on Law and Technology (Edward Elgar 2023)
Journal Articles
A Krištofík, ‘Bias in AI (Supported) Decision Making: Old Problems, New Technologies’ (2025) 16(1) International Journal for Court Administration 1
H Roberts and others, ‘Artificial Intelligence Regulation in the United Kingdom: A Path to Good Governance and Global Leadership?’ (2023) 12(2) Internet Policy Review
GF Lendvai and G Gosztonyi, ‘Algorithmic Bias as a Core Legal Dilemma in the Age of Artificial Intelligence: Conceptual Basis and the Current State of Regulation’ (2025) 14(3) Laws 41
M Kuśmierczyk, ‘Algorithmic Bias in the Light of the GDPR and the Proposed AI Act’ (SSRN, 8 May 2022) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4117936 accessed 25 November 2025
Reports and Institutional Documents
EU Agency for Fundamental Rights, Bias in Algorithms – Artificial Intelligence and Discrimination (2022)
FJ Zuiderveen Borgesius, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making (Council of Europe 2018)
H Graux and others, Interplay Between the AI Act and the EU Digital Legislative Framework (European Parliament, Study for the ITRE Committee, October 2025)
Information Commissioner’s Office, Guidance on AI and Data Protection (updated March 2023)
Web Sources
Ada Lovelace Institute, ‘Facial Recognition Technology Needs Proper Regulation’ (Ada Lovelace Institute Blog, 28 July 2020) https://www.adalovelaceinstitute.org/blog/facial-recognition-technology-needs-proper-regulation accessed 27 November 2025
Chambers and Partners, Artificial Intelligence 2025 – UAE (Chambers Global Practice Guide) https://practiceguides.chambers.com/practice-guides/artificial-intelligence-2025/uae accessed 27 November 2025
Statewatch, ‘UK: Threat of Legal Challenge Forces Home Office to Abandon “Racist Visa Algorithm”’ (4 August 2020) https://www.statewatch.org/news/2020/august/uk-threat-of-legal-challenge-forces-home-office-to-abandon-racist-visa-algorithm accessed 27 November 2025
Smart Dubai, AI Ethics Principles and Guidelines (2019) https://www.digitaldubai.ae/initiatives/ai-principles accessed 24 November 2025
UAE Government, UAE Charter for the Development and Use of Artificial Intelligence (2024) https://uaelegislation.gov.ae/en/policy/details/the-uae-charter-for-the-development-and-use-of-artificial-intelligence accessed 27 November 2025