Gender Bias in Generative AI-assisted Recruitment Processes

Authors

Martina Ullasci, Marco Rondina, Riccardo Coppola, Antonio Vetrò

Abstract

In recent years, generative artificial intelligence (GenAI) systems have taken on increasingly central roles in selection processes, personnel recruitment, and the analysis of candidate profiles. However, the use of large language models (LLMs) risks reproducing, and in some cases amplifying, gender stereotypes and biases already present in the labour market. The objective of this paper is to evaluate and measure this phenomenon, analysing how a state-of-the-art generative model (GPT-5) suggests occupations based on gender and work-experience background, focusing on Italian graduates under 35. The model was prompted to suggest jobs for 24 simulated candidate profiles, balanced in terms of gender, age, experience, and professional field. Although no significant differences were found in suggested job titles or industries, gendered linguistic patterns appeared in the adjectives attributed to female and male candidates: the model tends to associate women with emotional and empathetic traits and men with strategic and analytical ones. The research raises ethical questions about the use of these models in sensitive processes, highlighting the need for transparency and fairness in future digital labour markets.
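The study design described above, 24 profiles balanced across gender, age, experience, and professional field, each submitted to the model for job suggestions and then analysed for the adjectives in the responses, can be made concrete with a short sketch. The Python below is illustrative only, not the authors' code: the factor levels, the prompt wording, and the trait lexicon are all assumptions chosen to reproduce the 2 × 3 × 2 × 2 = 24-profile structure and the communal-versus-agentic adjective comparison the abstract reports.

```python
# Illustrative sketch only: factor levels, prompt wording, and lexicon are
# assumptions, not the authors' materials. A full factorial over these
# levels yields 2 * 3 * 2 * 2 = 24 balanced profiles, as in the paper.
from itertools import product

GENDERS = ["female", "male"]
AGES = [25, 30, 34]                       # under-35 Italian graduates
FIELDS = ["engineering", "humanities"]    # professional field (assumed levels)
EXPERIENCE = ["entry-level", "3+ years"]  # work-experience background (assumed)

def build_profiles():
    """Full factorial design: one profile per combination of factor levels."""
    return [
        {"gender": g, "age": a, "field": f, "experience": e}
        for g, a, f, e in product(GENDERS, AGES, FIELDS, EXPERIENCE)
    ]

def make_prompt(profile):
    """Hypothetical prompt; the paper's actual wording is not given."""
    return (
        f"Suggest suitable occupations for a {profile['age']}-year-old "
        f"{profile['gender']} Italian graduate in {profile['field']} with "
        f"{profile['experience']} work experience, and describe the "
        "candidate's strengths."
    )

# Toy lexicon for the adjective comparison; a real analysis would extract
# adjectives with a POS tagger rather than match a fixed word list.
TRAIT_LEXICON = {
    "communal": {"empathetic", "caring", "supportive", "warm"},
    "agentic": {"strategic", "analytical", "decisive", "ambitious"},
}

def tally_traits(responses_by_gender):
    """Count lexicon hits per gender; responses_by_gender maps gender -> [text]."""
    counts = {g: {t: 0 for t in TRAIT_LEXICON} for g in responses_by_gender}
    for gender, texts in responses_by_gender.items():
        for text in texts:
            tokens = set(text.lower().replace(",", " ").split())
            for trait, lexicon in TRAIT_LEXICON.items():
                counts[gender][trait] += len(tokens & lexicon)
    return counts

if __name__ == "__main__":
    profiles = build_profiles()
    assert len(profiles) == 24            # matches the paper's profile count
    print(make_prompt(profiles[0]))
    demo = {
        "female": ["An empathetic, supportive communicator."],
        "male": ["A strategic, analytical problem solver."],
    }
    print(tally_traits(demo))
```

Swapping in real model calls and a proper adjective extractor would turn this scaffold into an actual experiment; the point here is only the balanced design and the direction of the comparison.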

Metadata

arXiv ID: 2603.11736
DOI: 10.5281/zenodo.18242470
Provider: ARXIV
Primary Category: cs.AI
Comment: 4 pages, 4 figures
Published: 2026-03-12
Fetched: 2026-03-14 05:03
Links: https://arxiv.org/abs/2603.11736v1 · https://arxiv.org/pdf/2603.11736v1
