
Human, Algorithm, or Both? Gender Bias in Human-Augmented Recruiting

Authors

Mesut Kaya, Toine Bogers

Abstract

Recent years have seen rapid growth in the market for HR technology, and AI-driven HR solutions in particular. This popularity has also resulted in increased attention to the negative aspects of using AI to support hiring practices, such as the risk of reinforcing existing biases against vulnerable groups based on gender or other sensitive attributes. Combining human experience with AI efficiency in making recruiting and selection decisions has the potential to help mitigate these biases, but despite a considerable amount of research on fairness in algorithmic hiring, actual empirical evaluations comparing the fairness of human, AI, and human-augmented decision-making remain scarce. In this study, we address this gap by presenting a quantitative analysis of gender bias across three scenarios of a real-world recruitment platform: (1) recruiters searching a CV database manually for relevant candidates, (2) AI-driven matching between candidates and jobs, and (3) a combination of human and AI-driven recruiting. We find that human recruiters produce lists of candidates that are fairer in terms of gender than the AI-only solution, with more deliberation by humans resulting in fairer outcomes. However, the combination of human and AI-driven recruiting is more than the sum of its parts and produces the fairest candidate lists: interacting with the slate of recommended candidates first, before manually searching for additional candidates, has a beneficial effect on the gender fairness of the set of candidates that are viewed, clicked, and contacted afterwards. Our work provides one of the first empirical comparisons of fairness across human, AI, and hybrid recruiting processes, offering evidence to inform the development of more equitable hiring practices and highlighting the importance of human oversight for mitigating bias in algorithmic hiring.

Metadata

arXiv ID: 2603.06240
Provider: ARXIV
Primary Category: cs.CY
Published: 2026-03-06
Fetched: 2026-03-09 06:05
