
AI IN HIRING AND HR DECISIONS: DISCRIMINATION RISKS UNDER THE EQUALITY ACT 2010

Authored By: Lamisha Hasan

ABSTRACT 

Artificial intelligence (AI) is now heavily used in recruitment and HR decision-making, yet it carries considerable discrimination risks under the Equality Act 2010. Although the Act predates the widespread adoption of AI, its provisions on direct and indirect discrimination apply fully to automated decision-making. This article examines the legal consequences of AI-mediated hiring, the relevant judicial principles and the gaps in current UK regulation. It argues that although the Equality Act is capable of capturing discriminatory AI outputs, practical barriers such as a lack of clarity, difficulty obtaining evidence and problems holding AI vendors responsible limit enforcement. The article proposes reforms, impact assessments and clearer frameworks to ensure fairness in an AI-driven labour market.

INTRODUCTION 

AI has rapidly transformed recruitment through automated screening of CVs, psychometric testing, predictive scoring and facial analysis tools. Employers adopt AI for efficiency and consistency, yet significant risks have emerged worldwide. Amazon withdrew its experimental recruiting tool after finding that it systematically downgraded CVs containing terms associated with women, producing biased outcomes.1

In the UK, the Equality Act 2010 governs discriminatory conduct in employment, regardless of whether decisions are made by humans or by automated systems. As reliance on AI grows, it is essential to analyse how it interacts with the rules set out in the Equality Act. This article examines the risks of using AI in recruitment, analyses the relevant legal principles and recommends reforms.

RESEARCH METHODOLOGY 

This article uses a doctrinal and analytical approach, relying on statutes, case law and  regulatory guidance. Secondary sources include academic commentary, policy papers and  comparative frameworks.  

MAIN BODY 

LEGAL FRAMEWORK UNDER THE EQUALITY ACT 2010 

The Equality Act 2010 prohibits direct discrimination,2 indirect discrimination,3 harassment4 and victimisation. Indirect discrimination is central to algorithmic bias: a provision, criterion or practice (PCP), including automated scoring, is unlawful if it disadvantages protected groups unless it can be justified. Because algorithms rely heavily on historical data, biased outcomes can arise even without intent. Section 39 of the Equality Act 2010 also makes employers liable for discriminatory treatment in recruitment, and courts have held that employers remain responsible for flawed automated systems. In ‘Weeks v Commissioner of the Metropolis’,5 it was confirmed that employers remain responsible for unfair procedures used in HR. Additionally, Article 22 of the UK General Data Protection Regulation (UK GDPR) gives individuals the right not to be subject to decisions based solely on automated processing that significantly affect them.6 Employers using AI must therefore inform applicants about automated decision-making, provide meaningful explanations of the logic involved, and allow human intervention and review.

JUDICIAL INTERPRETATION AND CASE LAW 

UK case law provides useful guidance on how courts are likely to approach discrimination  claims involving AI, even though no major AI-specific case has yet reached the courts. In  ‘Essop v Home Office’7, the Supreme Court concluded that claimants do not need to show  the cause of a disadvantage. This principle is valuable for AI cases because applicants would  not have to explain how an obscure algorithm created the disadvantage, only that it did. In  ‘Nagarajan v London Regional Transport’8, the House of Lords confirmed that discrimination  can occur even without intent, recognising that bias can operate subtly. This makes the ruling  particularly relevant to AI systems trained on biased historical data. The duty of fairness in  assessment processes was further emphasised in ‘Project Management Institute v Latif’9, where the EAT stated that employers must consider reasonable adjustments during tests or  selection procedures. If an AI tool evaluates behaviour, tone, or facial expression in ways that  disadvantage disabled or neurodivergent applicants, this duty still applies.  

CRITICAL ANALYSIS 

Several features of the Equality Act 2010 make it difficult to address discrimination caused by AI in HR. Although the Act is broad, it was enacted before modern AI tools became common, so it does not directly address automated decision-making, which creates uncertainty about how it applies in these new situations. In practice, the law is hard to use because many AI systems operate as “black boxes,” revealing little about how decisions are made. This makes it difficult for applicants to gather evidence or prove that a protected group was disadvantaged, even though the Supreme Court in Essop v Home Office held that claimants need only show the negative impact of a rule, not explain why it occurred.10

Another central issue is the unclear division of responsibility between employers and AI developers. Employers are legally liable under section 39 of the Equality Act, but developers often refuse to share information about how their systems work on intellectual property grounds, creating an accountability gap that current UK law does not resolve.11 The duty to make reasonable adjustments for disabled people also becomes harder to enforce, because AI tools that analyse behaviour, expressions or tone may unintentionally disadvantage disabled or neurodivergent applicants without employers realising it.12 Compared with other jurisdictions, the UK’s approach is much weaker. The EU AI Act classifies AI used in hiring as “high-risk” and requires companies to conduct transparency checks, document risks and ensure human oversight.13 Similarly, New York City’s AEDT law requires annual bias audits of recruitment algorithms.14 These examples show that stronger and clearer regulation is possible. Overall, while the Equality Act 2010 provides a useful foundation, it does not work well in practice for AI-related discrimination because of limited transparency, unclear responsibility, weak oversight and the difficulty of proving harm.15
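To illustrate what such a bias audit typically measures, the sketch below (a minimal, hypothetical example in Python, not the methodology prescribed by any statute) compares selection rates across groups and flags impact ratios that fall below the commonly cited four-fifths benchmark; the group names and figures are invented for illustration.

```python
# Minimal sketch of a disparate-impact check a bias audit might run.
# Assumptions: "outcomes" is a list of (group, was_selected) pairs drawn
# from a recruitment tool's decisions; the 0.8 threshold reflects the
# commonly cited four-fifths benchmark, not a UK statutory test.
from collections import defaultdict

def impact_ratios(outcomes, threshold=0.8):
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    benchmark = max(rates.values())  # selection rate of the most-selected group
    return {g: (r / benchmark, r / benchmark >= threshold) for g, r in rates.items()}

# Hypothetical screening results: 60/100 men and 35/100 women shortlisted
data = ([("men", True)] * 60 + [("men", False)] * 40
        + [("women", True)] * 35 + [("women", False)] * 65)
for group, (ratio, passes) in impact_ratios(data).items():
    print(f"{group}: impact ratio {ratio:.2f} -> {'OK' if passes else 'FLAG'}")
```

A real audit would of course require representative data, intersectional categories and statistical testing, but the impact ratio shown here is the core quantity most audit frameworks report.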

RECENT DEVELOPMENTS 

  1. UK Regulatory Initiatives: 

The Equality and Human Rights Commission has published guidance on AI and algorithmic fairness to help employers identify and address discriminatory AI systems.16

  2. EU AI Act 2024:

This Act classifies employment-related AI as ‘high-risk’ and mandates risk assessments, documentation, transparency and human oversight.17

  3. Industry Trends:

A growing number of vendors now offer tools that audit AI systems for bias and explain how their decisions are made, which helps UK employers demonstrate compliance with equality law.

SUGGESTIONS/ WAY FORWARD 

  1. Mandatory Impact Assessments: 

A legal requirement for employers to assess AI systems for discriminatory impact before deployment would align the UK with emerging international standards.

  2. Expanded Transparency Duties:

Applicants should be told when AI is being used and should receive meaningful explanations of how decisions are made.

  3. Clarified Employer-Vendor Liability Framework:

A framework modelled on the UK GDPR’s distinction between data controllers and processors could clarify the respective responsibilities of employers and AI vendors.

  4. Strengthening Disability Protections:

Recruitment processes should offer alternative formats and adjustments so that disabled and neurodivergent applicants are not disadvantaged.

  5. Independent AI Auditing:

Independent experts should audit AI systems regularly to help ensure they are unbiased.

CONCLUSION 

AI offers efficiency and scalability in HR and recruitment, but it simultaneously creates discrimination risks. While the Equality Act 2010 provides a strong legal foundation for addressing these risks, its practical application is hampered by evidential burdens, opacity and a lack of dedicated regulation. To ensure fair hiring in an AI-driven labour market, the UK needs transparency obligations, clear rules on responsibility and algorithmic impact assessments.

BIBLIOGRAPHY 

LEGISLATION 

Equality Act 2010 

UK General Data Protection Regulation 

CASES 

Essop and others v Home Office (UK Border Agency) [2017] UKSC 27 

Nagarajan v London Regional Transport [1999] UKHL 36 

Project Management Institute v Latif (Employment Appeal Tribunal, 2007)

Weeks v The Commissioner of the Metropolis (Employment Appeal Tribunal, 2021)

BOOKS 

Pasquale F, The Black Box Society: The Secret Algorithms That Control Money and Information  (Harvard University Press, 2015) 

REPORTS 

Equality and Human Rights Commission, Guidance on Artificial Intelligence and Algorithmic  Fairness (EHRC 2023) 

European Parliament and Council, Artificial Intelligence Act (2024) https://artificialintelligenceact.eu/high-level-summary/ accessed 27 November 2025

UK Government, AI Regulation White Paper (Department for Science, Innovation and  Technology 2023) 

WEBSITES 

Braganza KC N, ‘Essop & Others v Home Office landmark indirect race and age discrimination claims finally settle for over £1 million three days into hearing’ (GardenCourtChambers.co.uk, 7 March 2019) https://gardencourtchambers.co.uk/essop-and-others-v-home-office-landmark-indirect-race-and-age-discrimination-claims-finally-settle-for-over-1-million-three-days-into-hearing/ accessed 27 November 2025

Tyler A, ‘Project Management Institute v Latif’ (StammeringLaw.org.uk, 9 June 2007)  https://www.stammeringlaw.org.uk/project-management-institute-v-latif/ accessed 27  November 2025.  

New York City Department of Consumer and Worker Protection, ‘Local Law 144: Automated  Employment Decision Tools’ (2023) https://www.nyc.gov/site/dca/index.page accessed 28 November 2025 

Oppenheim M, ‘Amazon scraps ‘sexist AI’ recruitment tool’ Independent (London, 11 October 2018) https://www.independent.co.uk/tech/amazon-ai-sexist-recruitment-tool-algorithm-a8579161.html accessed 26 November 2025
