Authored By: Zainab Ramzan
Coventry University
Abstract:
The integration of artificial intelligence (AI) into hiring procedures has transformed recruitment by offering scalability and efficiency. It has, nevertheless, also introduced major difficulties, especially by sustaining or magnifying biases that lead to discriminatory outcomes against protected groups. This paper investigates the legal ramifications of AI-driven discrimination in recruitment, focusing on how anti-discrimination laws such as the Equality Act 2010 (UK) and Title VII of the Civil Rights Act (US), together with data protection rules such as the GDPR, interact with it. It examines how inadequate regulatory oversight, opaque decision-making processes, and flawed training data produce algorithmic bias. Through case studies and legal analysis, the paper highlights key difficulties in tackling AI discrimination, including evidentiary constraints and accountability gaps for employers and AI developers. Its recommendations call for robust legislative reform, regular audits, and improved algorithmic transparency to reduce bias while encouraging innovation. The discussion underscores the pressing need for a balanced strategy that guarantees fairness, safeguards individual rights, and aligns technological development with legal doctrine.
Introduction:
In recruiting, artificial intelligence (AI) has become a transformative tool, changing how companies find, evaluate, and select applicants. By automating tasks such as candidate matching, resume screening, and even interview analysis, AI offers unmatched cost-effectiveness and efficiency. This technical development is not without difficulties, however. Although AI systems are often marketed as objective decision-making tools, they are prone to biases that can produce discriminatory outcomes. These biases may result from algorithmic design, flawed training data, or opaque decision-making processes.
AI discrimination in hiring raises important ethical and legal concerns. Under many laws, including the Equality Act 2010 in the United Kingdom and Title VII of the Civil Rights Act in the United States, discrimination in employment is forbidden. These laws seek to shield people from unfair treatment based on characteristics such as race, gender, age, or disability. Nevertheless, the distinctive character of AI systems complicates the application of these legal models. For instance, proving discrimination becomes harder when opaque algorithms, rather than human recruiters, make the decisions.
This paper explores the urgent problem of AI discrimination in hiring, focusing on its legal ramifications and challenges. It examines how current anti-discrimination rules address—or fail to address—biases in AI-driven hiring practices. Using case studies and legal analysis, it investigates the challenges of holding companies or AI developers responsible for discriminatory practices. It also underlines the wider social consequences of unchecked algorithmic bias in the workplace.
The need to tackle AI discrimination cannot be overstated. Recruitment decisions have profound effects on people's careers and livelihoods. Unregulated, biased AI systems could reinforce public mistrust of technology and entrench systemic inequality. This paper seeks to present a thorough investigation of the problem together with practical recommendations to guarantee fairness and accountability in AI-driven hiring. It thereby contributes to ongoing debates on how to balance innovation, legal compliance, and ethical responsibility in the digital era.
Legal Framework and Definitions:
The legal mechanisms limiting AI discrimination in recruiting are based on anti-discrimination and data protection regulations designed to ensure fairness and accountability in employment. This section discusses the key statutory provisions—the Equality Act 2010 (UK), Title VII of the Civil Rights Act of 1964 (US), and data privacy standards such as the GDPR—and defines important concepts, including algorithmic bias, direct discrimination, indirect discrimination, and disparate impact.
Equality Act 2010 (UK):
Anti-discrimination law in the UK is based on the Equality Act 2010. It prohibits discrimination in numerous areas, including employment, based on the protected characteristics of age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. In AI-driven hiring, the Act covers both direct and indirect discrimination:
- Direct discrimination is less favourable treatment because of a protected characteristic. An AI system that explicitly excludes applicants of a certain gender or race would constitute direct discrimination under sections 39(1)(a)[1] and 39(3)(a)[2] of the Equality Act 2010.
Example:
The employment tribunal in Smith v. Acme Corp [2024] UKET/0123 [hypothetical][3] found direct discrimination when an AI screening tool automatically rejected female candidates based on gender-coded keywords in their resumes.
- Indirect discrimination occurs when a neutral provision, criterion, or practice puts persons sharing a protected characteristic at a particular disadvantage. Under section 19(1)[4] of the Act, an AI program using biased training data or criteria might inadvertently disadvantage some groups. By favouring applicants with socioeconomically privileged educational backgrounds, for example, an AI system may indirectly discriminate against minority groups.
Example:
In Jones v. FutureTech Ltd [2023] EWCA Civ 456 (hypothetical)[5], the court explored whether an AI system's preference for extracurricular activities indirectly discriminated against low-income applicants.
Section 20 of the Equality Act 2010 imposes a duty to make reasonable adjustments[6]. For AI-driven hiring, this includes ensuring that automated systems offer alternative application formats that do not disadvantage disabled candidates.
Title VII of the Civil Rights Act of 1964 (US):
Title VII of the US Civil Rights Act prohibits employment discrimination based on race, colour, religion, sex (including pregnancy and gender identity), or national origin. The law recognises two primary forms of discrimination relevant to AI:
- Disparate treatment: Intentional discrimination, such as an employer using AI to exclude persons with protected characteristics.
- Disparate impact: Unintentional discrimination, where a facially neutral practice or tool disproportionately harms a protected group. An AI system that rates applicants by speech patterns, for example, may unintentionally disadvantage neurodiverse persons or non-native English speakers.
The Equal Employment Opportunity Commission (EEOC) has issued guidance on applying Title VII[7] to AI systems. It emphasises that employers must evaluate whether their AI tools cause disparate impact and comply with anti-discrimination law.
General Data Protection Regulation (GDPR):
The GDPR regulates the processing of personal data across the EU and directly affects AI-driven hiring. Article 22 of the GDPR restricts solely automated decision-making that produces legal or similarly significant effects on individuals. Its requirements include:
- Transparency: When using AI, companies must inform candidates about how AI systems manage their data.
- Fairness: Algorithmic processing of candidate data must not produce discriminatory outcomes.
- Accountability: Businesses must conduct periodic impact assessments and audits to demonstrate GDPR compliance.
Key Definitions:
- Algorithmic bias: Systematic, unjust discrimination arising from flawed algorithmic design or biased training data. For example, an algorithm trained on past hiring data may learn to reproduce historically biased rules.
- Direct discrimination: Less favourable treatment because of a protected characteristic.
- Indirect discrimination: A facially neutral provision, criterion, or practice that disadvantages persons sharing a protected characteristic.
- Disparate impact: Under US law, when a neutral employment practice disproportionately excludes a protected group.
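In practice, disparate impact is often screened for with the EEOC's "four-fifths rule": adverse impact is indicated when a group's selection rate falls below 80% of the most-favoured group's rate. A minimal sketch of such a check follows; the function names and figures are hypothetical illustrations, not a legal test:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, assessed)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag whether each group passes the EEOC four-fifths rule:
    its selection rate must be at least `threshold` times the
    highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Hypothetical screening outcomes: (candidates selected, candidates assessed)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.30) is only 62.5% of group_a's (0.48), below 80%,
# so adverse impact is indicated for group_b
```

A failed check is only an indicator, not proof of unlawful discrimination; the legal analysis of justification and business necessity still applies.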
The Intersection Between These Laws:
These frameworks provide strong anti-discrimination protections, but the opacity and complexity of AI systems make them difficult to apply. Notably:
- Under the UK Equality Act 2010, an algorithm can amount to a "provision, criterion, or practice" capable of causing indirect discrimination under section 19(1)[8].
- Title VII requires US employers to assess whether their AI tools produce disparate impact. This overlap of legal standards shows the need for explicit legislation addressing the employment challenges AI poses.
Examples and Case Studies:
Several well-known case studies illustrate AI discrimination in employment, showing how algorithmic bias can yield discriminatory results. They also demonstrate how difficult it is to identify and correct bias in AI-powered hiring systems.
Amazon's Resume Screening Tool:
Amazon's resume screening tool is commonly cited as an example of algorithmic bias. Developed from 2014, the algorithm was trained on resumes submitted over a ten-year period, most of them from men. As a result, the software penalised resumes mentioning "women" or phrases associated with women's colleges, discriminating against female candidates[9],[10]. Amazon discontinued the tool because it could not guarantee impartiality.
HireVue's Facial Analysis:
In 2019, a federal public interest organisation filed a complaint against HireVue, an AI hiring company, alleging unfair and deceptive practices. The program was said to unfairly exclude minority candidates by favouring certain facial expressions, speaking skills, and voice tones. HireVue has since stopped using facial recognition, but biases in speech patterns and other biometric data persist[11].
Google's CV Screening Program:
Google's AI-powered CV screening tool was criticised for downgrading CVs containing terms associated with women. Google responded by withdrawing the tool and placing greater emphasis on AI algorithm research[12].
Money Bank Case Study:
In this case study, Money Bank used computerised profiling to assess financial analyst candidates. Alice (female), Frank (black), and James (61 years old) questioned their rejections, suspecting discrimination. Despite using a reputable AI system, Money Bank could not explain the rejection criteria because of the algorithm's opacity[13].
Strategies AI Case Study:
Despite her strong qualifications, Hara, a promising computer science student, was rejected by Strategies AI. Her complaint prompted a US anti-discrimination investigation that identified concerns over data usage and suspected discrimination[14].
Key Case Study Lessons:
- Algorithmic bias: These events demonstrate how AI systems may perpetuate biases embedded in algorithm design or training data, resulting in discriminatory outcomes.
- Lack of transparency: Opaque AI decision-making processes hamper anti-discrimination efforts.
- Regulatory difficulties: Current legislation struggles to regulate AI-driven discrimination, so more specific guidance and oversight are needed.
These examples show how awareness and proactive oversight can minimise bias in AI hiring tools.
Training Data Bias: Causes and Effects:
- Historical bias: Training data often reflects prior employment practices. If a corporation has historically favoured male candidates, an AI system trained on that data will most likely reproduce the bias, disadvantaging female applicants[15].
- Limited data sets: AI systems may not fairly evaluate applicants from under-represented backgrounds when training data is scarce or unrepresentative of diverse groups. This can lead to biased results, as in cases where AI programs preferred applicants with lighter skin tones over those with darker skin tones.
- Designer bias: Through their decisions about data features or criteria, the engineers creating AI systems may unwittingly introduce biases. If gender is included as a criterion, the algorithm may produce gender-based discrimination in candidate evaluation.[16]
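The first mechanism above can be illustrated with a toy screener: a model that scores candidates by the historical hire rate of applicants with similar features simply reproduces any skew in that history. This is a deliberately simplified sketch with invented data, not any vendor's actual system:

```python
from collections import defaultdict

def train_screener(history):
    """Score each CV feature by its historical hire rate.
    history: list of (feature, hired) pairs, hired being 1 or 0."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [hires, total]
    for feature, hired in history:
        counts[feature][0] += hired
        counts[feature][1] += 1
    return {f: hires / total for f, (hires, total) in counts.items()}

# Invented history: CVs mentioning a women-coded activity were rarely
# hired in the past because of human bias, not merit.
history = ([("coding club", 1)] * 8 + [("coding club", 0)] * 2
           + [("women's chess club", 1)] * 2 + [("women's chess club", 0)] * 8)

scores = train_screener(history)
# The "trained" screener now penalises the women-coded feature,
# faithfully reproducing the historical skew.
print(scores)
```

Nothing in the training step distinguishes merit from prejudice: the model learns whatever pattern the historical decisions encode, which is exactly the dynamic reported in the Amazon example below.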
Examples of Bias in Training Data:
- Amazon's hiring tool: Amazon built an AI recruiting tool trained on resumes submitted over a decade, mostly by men. As a result, the algorithm downgraded resumes containing phrases like "women" or references to women's universities, discriminating against female candidates.
- Google's CV screening tool: Google's AI-powered CV screening tool drew criticism for gender bias after downgrading resumes with terms associated with women, a bias traced to underlying prejudices in its training data[17].
Mitigating bias in training data:
To address these issues, organizations should:
- Ensure diverse training data: Train AI systems on diverse, representative datasets to lower the likelihood of biased results.
- Implement algorithmic transparency: Make AI decision-making open to scrutiny so that biases can be seen and fixed.
- Conduct regular audits: Frequent audits help to identify and correct biases in AI systems.
Together, these actions help companies reduce the influence of biased training data and guarantee fairer employment practices.
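One concrete form such an audit can take, suggested by the Amazon example, is checking whether individual CV features act as proxies for a protected attribute. A minimal sketch with hypothetical records follows; the function name and the 0.8 skew threshold are illustrative assumptions:

```python
from collections import Counter, defaultdict

def proxy_audit(records, feature_key, group_key, skew=0.8):
    """Flag feature values that occur predominantly (>= skew of the
    time) within a single protected group, i.e. likely proxy variables."""
    by_feature = defaultdict(Counter)
    for r in records:
        by_feature[r[feature_key]][r[group_key]] += 1
    flagged = {}
    for feature, groups in by_feature.items():
        top_group, top_count = groups.most_common(1)[0]
        if top_count / sum(groups.values()) >= skew:
            flagged[feature] = top_group
    return flagged

# Hypothetical applicant records
records = [
    {"keyword": "netball team", "gender": "F"},
    {"keyword": "netball team", "gender": "F"},
    {"keyword": "netball team", "gender": "F"},
    {"keyword": "chess club", "gender": "F"},
    {"keyword": "chess club", "gender": "M"},
]
print(proxy_audit(records, "keyword", "gender"))
# {'netball team': 'F'} — a likely gender proxy worth human review
```

An audit of this kind does not fix bias by itself; it surfaces candidate proxy features for the human review and remediation steps the recommendations below describe.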
Opposing Perspectives on AI in Recruitment:
Using AI in hiring is a divisive topic: supporters claim it improves objectivity and efficiency, while critics point to its potential for discrimination and bias. Here are some contrasting viewpoints on AI in hiring:
Supporters: AI's Advantages in Recruitment:
- Efficiency and Speed: By processing hundreds of applications in seconds, artificial intelligence (AI) lowers the time-to-hire and enhances the applicant experience by giving faster responses.
- Objectivity and reduced human bias: AI systems are sometimes regarded as more objective than human recruiters, potentially lessening implicit biases in hiring decisions. They emphasise skills and credentials rather than personal impressions.
- Enhanced analytics: Improved analytics from AI enable companies to monitor diversity KPIs and refine their recruiting practices over time.
- Cost-effectiveness: AI can cut recruiting expenses and raise the quality of hires by automating tasks such as resume screening and interview scheduling.
Critics: Issues and Concerns:
- Algorithmic bias: Trained on flawed or limited datasets, AI systems can reinforce entrenched prejudices, producing discriminatory outcomes against groups such as women or minorities.
- Lack of transparency: The opaque character of AI decision-making makes biases difficult to identify and challenge, complicating efforts to hold employers accountable for discriminatory practices.
- Inaccurate candidate assessment: AI tools may mistakenly screen out highly qualified candidates because of biases in the algorithms or training data.
- Ethical concerns: Using AI for hiring raises questions of justice, responsibility, and the potential to aggravate socioeconomic inequality.
Balancing Perspectives:
Although AI offers great advantages in efficiency and potential objectivity, it is imperative to address transparency and bias. Broad training datasets, algorithmic transparency, and frequent audits can reduce these risks and help ensure that AI improves rather than compromises fairness in hiring.
Recommendations:
Companies should adopt a multifaceted strategy, addressing both technical and ethical issues, to reduce AI bias in recruiting. Key recommendations follow:
Technical Recommendations:
- Diversify training data: Ensure AI systems are trained on varied, representative datasets that span a broad spectrum of demographics, backgrounds, and experiences, lowering the likelihood that bias persists.
- Implement algorithm audits: Audit AI algorithms frequently to find and fix biases, potentially working with outside auditors for an objective analysis.
- Use fairness-aware algorithms: Choose methods designed around fairness requirements, such as re-weighting or re-sampling the data to lower bias.
- Adversarial training: Where feasible, use adversarial training methods, in which neural networks are pitted against one another to identify and reduce bias.
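The re-weighting approach above can be sketched with a standard pre-processing scheme (in the style of Kamiran and Calders' reweighing): each (group, outcome) pair is weighted so that group membership and outcome look statistically independent in the weighted data. The data and function name below are invented for illustration:

```python
from collections import Counter

def reweigh(samples):
    """Reweighing in the style of Kamiran and Calders: weight each
    (group, outcome) pair by P(group) * P(outcome) / P(group, outcome),
    so group and outcome are independent in the weighted data."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    outcome_counts = Counter(o for _, o in samples)
    joint_counts = Counter(samples)
    return {
        (g, o): (group_counts[g] / n) * (outcome_counts[o] / n)
                / (joint_counts[(g, o)] / n)
        for (g, o) in joint_counts
    }

# Invented data: group "a" was hired far more often than group "b"
samples = [("a", 1)] * 6 + [("a", 0)] * 2 + [("b", 1)] * 2 + [("b", 0)] * 6
weights = reweigh(samples)
# Hired members of the under-selected group are up-weighted,
# counteracting the historical skew when the model is retrained.
print(weights[("b", 1)] > weights[("a", 1)])  # True
```

Pre-processing of this kind addresses only the training-data channel of bias; it does not remove the need for the transparency and oversight measures below.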
Regulatory and Ethical Recommendations:
- Set explicit goals: Clearly define and communicate objectives for diversity, inclusiveness, and equal opportunity in AI hiring practices.
- Promote transparency: Encourage openness in AI decision-making by explaining how decisions are reached, helping applicants and staff develop confidence in the process.
- Implement ethical guidelines: Create and communicate ethical rules for the use of AI in hiring, including the frequency of audits and procedures for handling bias.
- Balance AI with human judgment: Ensure AI augments rather than replaces human procedures. Human oversight remains essential for catching and correcting biases that AI may overlook.
Legal and Regulatory Recommendations:
- Anti-discrimination compliance: AI systems should comply with existing anti-discrimination law, such as Title VII of the Civil Rights Act (US) and the Equality Act 2010 (UK).
- Data protection compliance: Following the GDPR and other data protection rules helps safeguard candidate data and guarantee privacy.
- Legislative reforms: Promote legal changes that specifically target AI discrimination in hiring, giving companies and developers clearer direction.
Following these recommendations can help companies lower the risk of AI discrimination, build a more inclusive workforce, and align technological development with ethical and legal norms.
Conclusion:
The use of AI in hiring procedures has given companies efficiency and scalability, changing the way they identify and select candidates. But this technical development also brings major difficulties, especially regarding bias and discrimination. AI systems can reinforce or magnify existing prejudices in training data, producing discriminatory outcomes against protected groups.
This paper has discussed the legal frameworks governing AI discrimination in employment, including Title VII of the Civil Rights Act (US) and the Equality Act 2010 (UK). It highlighted cases showing how AI bias may lead to discriminatory treatment of candidates on the basis of gender, race, or other protected attributes, and covered the difficulties in tackling AI discrimination, including algorithmic opacity and legal loopholes.
Recommendations such as diversifying training data, running algorithm audits, and encouraging openness in AI decision-making were put forward to help alleviate these problems. Furthermore, compliance with anti-discrimination legislation and the protection of candidate rights depend on legislative reforms specifically addressing AI discrimination in recruiting.
While AI has great potential to improve recruiting efficiency, the ethical and legal issues surrounding its application must be resolved. By proactively reducing bias and guaranteeing transparency, organisations can maximise the advantages of AI while promoting a fair and inclusive workforce.
Bibliography:
Primary sources:
Legislation:
Equality Act 2010 (c.15)
Equality Act 2010 (UK), s 39(1)(a)
Equality Act 2010 (UK), s 39(3)(a)
Equality Act 2010, s 19(1)
Equality Act 2010 (UK), s 20
Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq.
Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and the free movement of such data art.22
Cases:
Smith v. Acme Corp [2024] UKET/0123 (hypothetical).
Jones v. FutureTech Ltd [2023] EWCA Civ 456 (hypothetical)
Secondary sources:
Journal articles:
- Dahlstrom A, Campbell M and Hewitt C, “Mitigating Uncertainty Using Alternative Information Sources and Expert Judgement in Aquatic Non-Indigenous Species Risk Assessment” (2012) 7 Aquatic Invasions 567
- Bogusz SL, White JM and Ingram C, “The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision-Making”
- Intahchomphoo C and others, “Effects of Artificial Intelligence and Robotics on Human Labour: A Systematic Review” (2024) 24 Legal Information Management 109
Websites:
- Amy-Cutbill, “Bias and Fairness in AI-Driven Hiring Practices” (Horton International | We can help you achieve your goals, get in touch today, July 16, 2024) <https://hortoninternational.com/addressing-bias-and-fairness-in-ai-driven-hiring-practices/>
- Anderson A, “Legal and Ethical Risks of Using AI in Hiring” (Recruitics, LLC, October 2, 2024) <https://info.recruitics.com/blog/legal-and-ethical-risks-of-using-ai-in-hiring>
- “Blog – When Machines Discriminate: Addressing Algorithmic Bias in Recruitment” (Redline Group) <https://www.redlinegroup.com/insight-details/when-machines-discriminate-addressing-algorithmic-bias-in-recruitment>
- Bishop J, “AI in Recruiting: Pros Vs. Cons of Hiring with Artificial Intelligence” <https://www.helioshr.com/blog/ai-in-recruiting-pros-vs.-cons-of-hiring-with-artificial-intelligence>
- Chen Z, “Ethics and Discrimination in Artificial Intelligence-Enabled Recruitment Practices” (2023) 10 Humanities and Social Sciences Communications 1
- “Challenges of Adopting AI in Recruitment [2024]” (Carv – AI for Recruiters) <https://www.carv.com/blog/challenges-of-adopting-ai-in-recruitment>
- Data I and Team A, “Shedding Light on AI Bias with Real World Examples” (IBM) <https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples>
- Davies J, “Discrimination and Bias in AI Recruitment: A Case Study” (Lewis Silkin, October 31, 2023) <https://www.lewissilkin.com/insights/2023/10/31/discrimination-and-bias-in-ai-recruitment-a-case-study>
- Deady D, “How to Avoid Bias When Using AI for Sourcing” (SocialTalent, February 14, 2024) <https://www.socialtalent.com/blog/recruiting/how-to-avoid-bias-when-using-ai-for-sourcing>
- Fullen C, “AI in Recruiting 2024: Pros and Cons” Korn Ferry (August 6, 2024) <https://www.kornferry.com/insights/featured-topics/talent-recruitment/ai-in-recruiting-navigating-trends-for-2024>
- Hermele D, “Demystifying Bias in AI: Lessons from Amazon’s Sexist Recruiting AI” (Tengai, August 29, 2023) <https://tengai.io/blog/demystifying-bias-in-ai-lessons-from-amazons-sexist-recruiting-engine>
- V, “Ethical Considerations of AI-Driven Recruitment: A Detailed Guide” (December 8, 2023) <https://www.linkedin.com/pulse/ethical-considerations-ai-driven-recruitment-detailed-guide-q56ze/>
- Intezari A, “What Will a Robot Make of Your Résumé? The Bias Problem with Using AI in Job Recruitment” (The Conversation, June 9, 2024) <https://theconversation.com/what-will-a-robot-make-of-your-resume-the-bias-problem-with-using-ai-in-job-recruitment-231174>
- Laker B, “The Dark Side of AI Recruiting: Depersonalization and Its Consequences on The Modern Job Market” Forbes (July 7, 2023) <https://www.forbes.com/sites/benjaminlaker/2023/07/07/the-dark-side-of-ai-recruiting-depersonalization-and-its-consequences-on-the-modern-job-market/>
- “Learn How AI Hiring Bias Can Impact Your Recruitment Process” (VidCruiter, May 27, 2024) <https://vidcruiter.com/interview/intelligence/ai-bias/>
- Lytton C, “AI Hiring Tools May Be Filtering out the Best Job Applicants” BBC (February 16, 2024) <https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination> accessed February 13, 2025
- Malik A, “AI Bias In Recruitment: Ethical Implications and Transparency” Forbes (September 25, 2023) <https://www.forbes.com/councils/forbestechcouncil/2023/09/25/ai-bias-in-recruitment-ethical-implications-and-transparency/> accessed February 13, 2025
- Payne B, “Breaking Barriers: Tackling Bias in AI Recruitment for Fair and Inclusive Hiring” (Gem, January 24, 2024) <https://www.gem.com/blog/ai-recruitment-bias>
- “Uncovering Bias in AI-Driven Recruitment: Challenges, Solutions and Real-Life Examples” (Langley Search) <https://www.langleysearch.com/blog/2024/04/uncovering-bias-in-ai-driven-recruitment-challenges-solutions-and-real-life-examples?source=perplexity.ai>
- Proffitt K, “AI in Recruitment Offers 7 Exceptional Benefits for HR” (HRMorning, August 31, 2023) <https://www.hrmorning.com/articles/top-benefits-of-ai-in-recruitment/>
- Rafi M, “When AI Plays Favourites: How Algorithmic Bias Shapes the Hiring Process” (The Conversation, October 14, 2024) <https://theconversation.com/when-ai-plays-favourites-how-algorithmic-bias-shapes-the-hiring-process-239471>
- Swift J, “Algorithmic Bias in Job Hiring” (Gender Policy Report, July 22, 2024) <https://genderpolicyreport.umn.edu/algorithmic-bias-in-job-hiring/>
- Tales F, “Case Studies: When AI and CV Screening Goes Wrong” (Fairness Tales) <https://www.fairnesstales.com/p/issue-2-case-studies-when-ai-and-cv-screening-goes-wrong>
- Team H, “7 Top Benefits of AI in Recruiting” (Harver, February 27, 2020) <https://harver.com/blog/benefits-ai-in-recruiting/>
- Virchaux D, Renick M and Lloyd T, “AI in Talent Acquisition: Top Challenges for 2025” Korn Ferry (December 9, 2024) <https://www.kornferry.com/insights/featured-topics/talent-recruitment/ai-in-talent-acquisition-top-challenges-for-2025>
- Zaker Ul Oman, Ayesha Siddiqua and Ruqia Noorain, “Artificial Intelligence and Its Ability to Reduce Recruitment Bias” (2024) 24 World Journal of Advanced Research and Reviews 551
[1] Equality Act 2010 (UK), s 39(1)(a)
[2] Equality Act 2010 (UK), s 39(3)(a)
[3] Smith v. Acme Corp [2024] UKET/0123 (hypothetical).
[4] Equality Act 2010, s 19(1)
[5] Jones v. FutureTech Ltd [2023] EWCA Civ 456 (hypothetical)
[6] Equality Act 2010 (UK), s 20
[7] Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq.
[8] Equality Act 2010 (UK), s 19(1).
[9] Redline Group, ‘When Machines Discriminate: Addressing Algorithmic Bias in Recruitment’
[10] Nature, ‘Ethics and discrimination in artificial intelligence-enabled recruitment’
[11] The Conversation, ‘When AI plays favourites: How algorithmic bias shapes the hiring process’
[12] Langley Search, ‘Uncovering Bias in AI-Driven Recruitment: Challenges, Solutions, and Real-Life Examples’
[13] Lewis Silkin, ‘Discrimination and Bias in AI Recruitment: A Case Study’
[14] Princeton Dialogues on AI and Ethics, ‘Hiring by Machine’
[15] Nature, ‘Ethics and discrimination in artificial intelligence-enabled recruitment’
[16] World Journal of Advanced Research and Reviews, ‘Artificial Intelligence and its ability to reduce recruitment bias’ (2024)
[17] Langley Search, ‘Uncovering Bias in AI-Driven Recruitment: Challenges, Solutions, and Real-Life Examples’