
Can artificial intelligence be a discriminatory employee? Rethinking equality law in the age of algorithms

Authored By: Anne-Sophie Barbe

University of Law

ABSTRACT 

Artificial intelligence (AI) has become increasingly integrated into employment processes, from CV screening to performance evaluation. While these systems promise efficiency and objectivity, they can unintentionally replicate or amplify existing human biases. This article explores whether the current legal framework in the United Kingdom, principally the Equality Act 2010 and the Data Protection Act 2018, offers adequate protection against algorithmic discrimination in employment. Through analysis of legislation, case law, and regulatory guidance, it highlights the legal and evidential challenges in attributing liability for AI-driven bias. The discussion argues that UK equality law, though conceptually broad, is ill-equipped to address opaque algorithmic systems. The article concludes that reform is necessary to clarify employer accountability, enhance algorithmic transparency, and ensure that fairness in the workplace evolves alongside technological innovation.

INTRODUCTION  

Artificial intelligence is rapidly reshaping recruitment and workplace management. Employers increasingly rely on automated systems to assess candidates, monitor performance, and even predict future success. Tech companies promote AI as a tool of neutrality, free from human prejudice and guided only by data. Yet, as experience has shown, data is never truly neutral. When algorithms are trained on historical patterns, they can reproduce systemic inequalities rather than remove them.

A striking example occurred when Amazon abandoned its AI recruitment tool after it was found to downgrade applications containing the word “women’s”.[1] This incident reveals a growing concern: can an employer be liable for discrimination when the bias originates from a machine rather than a person?

This question sits at the intersection of technology, ethics, and law. The UK’s Equality Act 2010 was not drafted with algorithmic decision-making in mind, and while the Data Protection Act 2018 provides safeguards for automated processing, enforcement remains limited. As the UK government promotes innovation through its AI Regulation White Paper (2023), the need to balance efficiency with fairness becomes more urgent.

The purpose of this article is to evaluate whether existing legal protections are sufficient to address algorithmic discrimination in employment, and to consider possible reforms that could ensure that equality principles remain effective in the age of automation.

RESEARCH METHODOLOGY 

This article adopts an analytical and comparative approach. It examines UK statutory and regulatory provisions, particularly the Equality Act 2010, the Data Protection Act 2018, and Article 22 of the UK GDPR, and reviews relevant judicial interpretation on indirect discrimination and vicarious liability. Academic commentary and regulatory guidance from the Information Commissioner’s Office (ICO) and the Equality and Human Rights Commission (EHRC) are used to illustrate practical and doctrinal gaps. Comparisons are also drawn with the European Union’s Artificial Intelligence Act 2024, which provides a more structured regulatory framework.

LEGAL FRAMEWORK 

The Equality Act 2010 protects individuals from discrimination on the basis of protected characteristics such as sex, race, disability, and age.[2] Section 19 defines indirect discrimination as a provision, criterion or practice which, although neutral in form, places persons with a protected characteristic at a particular disadvantage compared to others, unless it can be objectively justified.[3]

This principle can readily apply to AI systems. For instance, an algorithm trained on historical recruitment data may disproportionately exclude female applicants if previous hiring patterns reflected gender bias. Even if no human intends to discriminate, the outcome may still amount to indirect discrimination under the Act.
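To make the mechanism concrete, the following sketch (in Python, using entirely synthetic data and invented feature names) shows how a screening model can reproduce historical bias even when the protected characteristic itself is withheld from training: a proxy feature correlated with gender, echoing the “women’s” keyword in the Amazon example, carries the bias instead.

```python
# A minimal, hypothetical sketch (synthetic data, invented feature names)
# of how a screening model can reproduce historical bias even when the
# protected characteristic is withheld from training: a proxy feature
# correlated with gender carries the bias instead.

import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_applicant():
    gender = random.choice(["F", "M"])
    years_experience = random.gauss(5, 2)
    # Proxy correlated with gender, echoing the "women's" keyword in the
    # Amazon example: 1 if the CV mentions a women's organisation.
    womens_keyword = 1 if gender == "F" and random.random() < 0.7 else 0
    # Biased historical outcome: past recruiters favoured men at the
    # same level of experience.
    score = years_experience + (1.5 if gender == "M" else 0) + random.gauss(0, 1)
    return gender, [years_experience, womens_keyword], int(score > 6)

data = [make_applicant() for _ in range(5000)]
X = [features for _, features, _ in data]
y = [hired for _, _, hired in data]

# Gender itself is never shown to the model.
model = LogisticRegression().fit(X, y)

# Compare predicted hire rates by gender on the same population.
for g in ("F", "M"):
    group = [features for gender, features, _ in data if gender == g]
    rate = sum(model.predict(group)) / len(group)
    print(f"Predicted hire rate, group {g}: {rate:.2%}")
```

On this synthetic data the model predicts a markedly lower hire rate for the female group despite never seeing the gender column, which is exactly the neutral-in-form, disadvantageous-in-effect pattern that section 19 captures.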

The Data Protection Act 2018 complements equality law by regulating automated decision-making. Article 22 of the UK GDPR gives individuals the right not to be subject to a decision based solely on automated processing that significantly affects them, as well as the right to obtain human review.[4] However, this right is often poorly understood by job applicants and seldom exercised.

Despite these protections, there is no specific duty on employers to test or audit AI systems for discriminatory impact before use. Current law treats algorithms as tools, leaving liability dependent on whether an employee or employer “used” the system in a discriminatory way. This gap highlights the limits of a framework designed for human decision-makers rather than autonomous systems.

JUDICIAL INTERPRETATION 

While there are no UK cases directly concerning algorithmic discrimination, existing jurisprudence offers analogies. In Essop v Home Office (UK Border Agency), the Supreme Court held that claimants alleging indirect discrimination do not need to explain why a practice causes disadvantage, only that it does.[5] This principle could assist claimants challenging algorithmic outcomes when the internal logic of a system is inaccessible or complex.

Similarly, Barclays Bank plc v Various Claimants confirmed that vicarious liability does not extend to true independent contractors, but turns on whether the wrongdoer’s relationship with the defendant is akin to employment.[6] Applied analogically, an employer’s responsibility for biased outcomes produced by third-party AI software may depend on how closely the tool is integrated into its recruitment process.

However, evidential barriers remain significant. Without access to an algorithm’s design or data, proving discrimination becomes nearly impossible. Courts may therefore struggle to apply traditional legal tests to machine-generated outcomes without further statutory guidance or technical expertise.

CRITICAL ANALYSIS 

The primary challenge in regulating AI lies in opacity and accountability. Algorithms often function as “black boxes”, meaning even their developers cannot fully explain their decisions.[7] In employment contexts, this opacity undermines both the transparency principle under data protection law and the burden-shifting mechanism under section 136 of the Equality Act 2010, which requires claimants to establish a prima facie case before the burden shifts to the respondent.

Regulators also face structural limits. The EHRC, though empowered to enforce the Equality Act, lacks the technical capacity to audit algorithmic systems. The ICO can address data protection violations but has no direct mandate to investigate discrimination. As a result, enforcement responsibility is fragmented.

From a doctrinal perspective, UK equality law remains reactive. It relies on individuals bringing claims after harm has occurred, whereas AI bias requires proactive prevention. The European Union’s AI Act 2024 adopts a contrasting model by classifying employment-related AI as “high risk” and imposing mandatory bias testing, documentation, and transparency obligations.[8] The UK’s “pro-innovation” regulatory strategy, by contrast, emphasises voluntary compliance, which risks leaving systemic bias unchecked.

RECENT DEVELOPMENTS 

Recent initiatives show growing awareness of algorithmic discrimination but limited legislative progress. The UK Government’s AI Regulation White Paper advocates a principles-based approach, empowering existing regulators such as the EHRC and ICO to oversee compliance.[9] However, it avoids new statutory obligations, favouring flexibility over enforceable rights.

In 2023, the EHRC published guidance on algorithmic fairness, encouraging employers to ensure that automated systems align with equality duties.[10] The ICO has also partnered with the Alan Turing Institute to develop auditing tools for algorithmic accountability. Yet these remain advisory rather than mandatory.

Globally, the policy landscape is shifting. The EU’s AI Act introduces fines and liability for discriminatory AI, while the United States’ Equal Employment Opportunity Commission (EEOC) has begun investigating biased recruitment algorithms. The UK risks lagging behind these jurisdictions if it fails to implement clear statutory duties addressing algorithmic bias in employment.

SUGGESTIONS 

To ensure that equality law remains effective in the digital workplace, several reforms should be considered:

  1. Amend the Equality Act 2010 to explicitly include algorithmic bias within the definition of indirect discrimination.
  2. Mandate algorithmic impact assessments for employers using AI in hiring or management decisions, similar to data protection impact assessments under the UK GDPR (a simple illustration of such an assessment follows this list).
  3. Enhance cooperation between the EHRC and the ICO, enabling joint investigations into discriminatory algorithms.
  4. Introduce an “AI transparency register”, requiring employers to disclose the use of automated decision-making tools that significantly affect employment opportunities.
  5. Develop judicial and regulatory expertise through training programmes, ensuring that courts and tribunals can interpret technical evidence effectively.
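By way of illustration for the second suggestion, the sketch below shows one check such an assessment might include: comparing a tool’s selection rates across protected groups. The 0.8 threshold mirrors the US EEOC “four-fifths rule” and is used here only as an illustrative benchmark; UK law prescribes no numeric test, and the data and function names are hypothetical.

```python
# A minimal, hypothetical sketch of one check an algorithmic impact
# assessment might include: comparing a tool's selection rates across
# groups. The 0.8 threshold mirrors the US EEOC "four-fifths rule" and
# is purely illustrative; UK law prescribes no numeric test. The data
# and function names are invented.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def adverse_impact_report(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the best-performing group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

# Example: logged outcomes from a hypothetical CV-screening tool.
log = ([("women", True)] * 30 + [("women", False)] * 70
       + [("men", True)] * 50 + [("men", False)] * 50)

for group, result in adverse_impact_report(log).items():
    print(group, result)
```

Here the tool selects 30% of women against 50% of men, an impact ratio of 0.6, so the women’s group would be flagged for further investigation, objective justification, or redesign.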

These reforms would strengthen accountability and protect workers without undermining innovation, ensuring that technological progress aligns with the fundamental principles of fairness and equality.

CONCLUSION 

Artificial intelligence has the potential to transform employment, but without proper regulation it risks entrenching inequality under the guise of efficiency. The Equality Act 2010 and the Data Protection Act 2018 provide a foundation, yet they were never designed to confront the opacity and autonomy of modern algorithms.

If the law continues to treat AI merely as a neutral tool, it will fail to capture the new ways in which discrimination can occur. Ensuring that fairness is built into algorithmic systems from the outset requires proactive oversight, transparent design, and a renewed understanding of employer responsibility.

The challenge is not only technological but constitutional: how to uphold the right to equality in a society increasingly governed by data. The answer lies in adapting our legal framework to recognise that bias can exist not just in human hearts, but also in lines of code.

BIBLIOGRAPHY 

Primary Sources  

Barclays Bank plc v Various Claimants [2020] UKSC 13

Data Protection Act 2018

Equality Act 2010

Essop v Home Office (UK Border Agency) [2017] UKSC 27

UK GDPR, art 22

Secondary Sources  

Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision-Making in the Machine Learning Era’ (2017) 105 Georgetown Law Journal 1147

Equality and Human Rights Commission, Guidance on Artificial Intelligence and Algorithmic Fairness (EHRC, 2023)

European Union, Artificial Intelligence Act (2024)

Information Commissioner’s Office, Explaining Decisions Made with AI (ICO, 2020)

Sandra Wachter, ‘Normative Challenges of Machine Learning in Decision-Making’ (2021) 34(2) Oxford Internet Institute Journal

UK Government, AI Regulation White Paper (Department for Science, Innovation and Technology, 2023)

[1] Reuters, ‘Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women’ (11 October 2018)

[2] Equality Act 2010, s 4

[3] ibid, s 19

[4] UK GDPR, art 22

[5] Essop v Home Office (UK Border Agency) [2017] UKSC 27

[6] Barclays Bank plc v Various Claimants [2020] UKSC 13

[7] Sandra Wachter, ‘Normative Challenges of Machine Learning in Decision-Making’ (2021) 34(2) Oxford Internet Institute Journal

[8] European Union, Artificial Intelligence Act (2024)

[9] UK Government, AI Regulation White Paper (2023)

[10] Equality and Human Rights Commission, Guidance on Artificial Intelligence and Algorithmic Fairness (2023)
