Authored By: Sara Karim
Middlesex University
Introduction
Artificial Intelligence (AI) and algorithmic systems are reshaping how societies make decisions, from hiring and policing to healthcare and credit scoring. While these systems promise speed and efficiency, their deployment often comes at the cost of fairness and accountability. A pressing concern is the role AI can play in perpetuating or amplifying discrimination, whether through biased training data or through proxies that correlate with protected characteristics such as race or gender. Discrimination in law is typically categorised as direct (explicit and intentional) or indirect (where apparently neutral criteria disproportionately disadvantage certain groups). Algorithms may not “intend” to discriminate, but they can reproduce historical patterns of exclusion at scale. This creates new challenges for legal systems, which must adapt principles of fairness, transparency, and accountability to the age of automation.

This essay compares how the United Kingdom (UK) and the United States (US), two influential common law jurisdictions, address the intersection of AI and discrimination law. The UK operates within a cohesive equality and data protection framework, while the US relies on civil rights statutes and enforcement by regulatory agencies. By examining key laws, regulatory strategies, and case examples, this essay evaluates the strengths and limitations of each system and proposes reforms to close accountability gaps.
Legal Framework in the United Kingdom
The primary anti-discrimination statute in the UK is the Equality Act 2010, which prohibits both direct and indirect discrimination across protected characteristics, including age, race, sex, disability, and religion.1 These protections apply in employment, education, and the provision of goods and services. Notably, the Act’s language is broad enough to capture algorithmic practices that produce discriminatory outcomes, even where no human actor intends harm. In addition, the UK General Data Protection Regulation (UK GDPR) provides further safeguards. Article 22 restricts decisions “based solely on automated processing” that significantly affect individuals, such as job rejections or credit scoring.2 It also gives individuals the right to human review, the ability to contest decisions, and the ability to seek an explanation, mechanisms crucial for combating algorithmic opacity. Public bodies are also subject to the Public Sector Equality Duty (PSED) under section 149 of the Equality Act, which requires public authorities to consider how their policies, including algorithmic tools, affect disadvantaged groups.3 The Equality and Human Rights Commission (EHRC) recommends conducting Equality Impact Assessments (EqIAs) before deploying AI in public services.4 Despite these frameworks, enforcement in practice remains limited. Much of the regulation is complaint-driven and lacks proactive auditing, and private sector algorithms, such as those used by hiring platforms, often fall outside meaningful scrutiny due to commercial secrecy or lack of transparency.5 In response to these challenges, the UK government published its AI Regulation White Paper in 2023, proposing a sector-specific, pro-innovation approach.6 While the paper acknowledges discrimination risks, it stops short of introducing binding algorithmic accountability mechanisms or an independent AI regulator.
Legal Framework in the United States
The US lacks a unified discrimination statute comparable to the UK’s Equality Act. Instead, it relies on a patchwork of federal and state laws, the most prominent being Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, sex, or national origin.7 Other relevant statutes include the Americans with Disabilities Act (ADA) and the Fair Housing Act (FHA). The Equal Employment Opportunity Commission (EEOC) plays a leading role in enforcement. In 2021, it launched an initiative on Artificial Intelligence and Algorithmic Fairness, focusing on employment-related AI tools.8 A well-known example is Amazon’s AI recruiting tool, which was scrapped after it was found to penalise female candidates based on historical hiring data.9 Unlike the UK, the US does not guarantee a right to explanation or human intervention for automated decisions. Legal accountability often depends on affected individuals bringing claims through litigation, which makes redress difficult, especially when claimants lack technical knowledge of how the algorithm operates. There are, however, emerging legislative efforts. The Algorithmic Accountability Act, reintroduced in 2022, would require large companies to conduct impact assessments of high-risk automated decision systems.10 Meanwhile, in 2023, President Biden signed an Executive Order on Safe, Secure, and Trustworthy AI, directing federal agencies to issue rules preventing algorithmic discrimination in housing, health, and employment.11 Despite strong civil rights traditions and agency enforcement, the US framework remains fragmented. State-level laws, such as Illinois’ Biometric Information Privacy Act, provide some additional protections, but the absence of a national AI law limits consistency and comprehensiveness.
Comparative Evaluation
Both countries acknowledge the risks of algorithmic bias but diverge in their legal strategies. The UK provides broader statutory protection: the Equality Act, UK GDPR Article 22, and the PSED create a legal environment supportive of rights-based claims. However, actual enforcement is reactive and depends heavily on individual complaints. While the EHRC encourages best practices, it lacks the powers to compel private actors to disclose or audit AI systems proactively. The US, by contrast, has stronger enforcement machinery: the EEOC, FTC, and state attorneys general hold broad investigatory powers. However, the system is litigation-driven and reactive, with no general right to explanation or contestation, and the reliance on state laws creates regulatory inconsistency across jurisdictions. Transparency is a common weakness. In the UK, Article 22 provides a nominal right to explanation, but enforcement and clarity are weak.12 The US offers no federal equivalent, and affected individuals may never know that an algorithm was involved in a decision. Overall, the UK benefits from a cohesive legislative base but lacks effective regulatory muscle; the US, while strong in enforcement, needs a national framework to ensure algorithmic accountability beyond case-by-case litigation.
Conclusion
AI poses a significant challenge to anti-discrimination law. Both the UK and the US have begun addressing these risks, but neither has yet created a comprehensive, enforceable regime to ensure fairness in algorithmic decision-making. The UK’s legislative coherence, especially under the Equality Act and the UK GDPR, offers a model for defining legal duties, but its proactive enforcement mechanisms are weak. The US system benefits from active litigation and regulatory bodies but suffers from fragmentation and a lack of individual procedural rights. To improve protections, both countries should consider:
- mandating algorithmic impact assessments;
- strengthening rights to explanation and contestation; and
- creating independent regulators for algorithmic fairness.
Without such reforms, both jurisdictions risk allowing AI systems to perpetuate existing inequalities under the guise of technological neutrality.
Bibliography
Legislation and Government Documents
- Civil Rights Act of 1964, Title VII, 42 USC §2000e
- Equality Act 2010
- General Data Protection Regulation (UK GDPR), Regulation (EU) 2016/679, as retained in UK law
- Algorithmic Accountability Act 2022, H.R.6580
- Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (30 October 2023)
- Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation (White Paper, March 2023)
- Equality and Human Rights Commission, AI, Big Data and Democracy (EHRC 2018)
- Equal Employment Opportunity Commission, Artificial Intelligence and Algorithmic Fairness Initiative (EEOC 2021)
Books and Journal Articles
- Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a Right to Explanation Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law & Technology Review 18
- Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’ (2021) 41 Computer Law & Security Review 105567
News Articles and Other Online Sources
- Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters (10 October 2018)
1 Equality Act 2010, ss 4–19.
2 UK GDPR, Art 22.
3 Equality Act 2010, s 149.
4 Equality and Human Rights Commission, ‘AI, big data and democracy’ (2018).
5 Sandra Wachter, Brent Mittelstadt and Chris Russell, ‘Why Fairness Cannot Be Automated’ (2021) 41 Computer Law & Security Review 105567.
6 Department for Science, Innovation and Technology, ‘A pro-innovation approach to AI regulation’ (White Paper, March 2023).
7 Civil Rights Act of 1964, Title VII, 42 USC §2000e.
8 Equal Employment Opportunity Commission, ‘Artificial Intelligence and Algorithmic Fairness Initiative’ (2021).
9 Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’ Reuters (10 October 2018).
10 Algorithmic Accountability Act 2022, H.R.6580.
11 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (30 October 2023).
12 Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a Right to Explanation Is Probably Not the Remedy You Are Looking For’ (2017) 16(1) Duke Law & Technology Review 18.