
Beyond the Resumé: A Rubric-Aware Automatic Interview System for Information Elicitation

Authors

Harry Stuart, Masahiro Kaneko, Timothy Baldwin

Abstract

Effective hiring is integral to the success of an organisation, but it is very challenging to find the most suitable candidates because expert evaluation (e.g. interviews conducted by a technical manager) is expensive to deploy at scale. Therefore, automated resume scoring and other applicant-screening methods are increasingly used to coarsely filter candidates, making decisions on limited information. We propose that large language models (LLMs) can play the role of subject matter experts to cost-effectively elicit information from each candidate that is nuanced and role-specific, thereby improving the quality of early-stage hiring decisions. We present a system that leverages an LLM interviewer to update belief over an applicant's rubric-oriented latent traits in a calibrated way. We evaluate our system on simulated interviews and show that belief converges towards the simulated applicants' artificially-constructed latent ability levels. We release code, a modest dataset of public-domain/anonymised resumes, belief calibration tests, and simulated interviews, at https://github.com/mbzuai-nlp/beyond-the-resume. Our demo is available at https://btr.hstu.net.
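The abstract describes updating a calibrated belief over an applicant's latent traits as interview evidence accumulates, with belief converging towards the simulated applicant's true ability. As an illustration only (the paper's actual method is not given on this page), a minimal sketch of this idea is a Gaussian (Kalman-style) belief update over a scalar trait, where each interview turn yields a noisy signal of the true ability; all names, parameters, and noise levels below are hypothetical:

```python
# Hypothetical sketch of calibrated belief updating over one latent trait.
# NOT the authors' method; a generic Gaussian update under assumed noise.
import random
from dataclasses import dataclass

@dataclass
class TraitBelief:
    mean: float  # current estimate of the latent trait
    var: float   # current uncertainty about that estimate

    def update(self, signal: float, noise_var: float) -> None:
        """Incorporate one noisy interview signal (Kalman-style update)."""
        gain = self.var / (self.var + noise_var)  # how much to trust the signal
        self.mean += gain * (signal - self.mean)
        self.var *= 1.0 - gain  # uncertainty shrinks with each observation

random.seed(0)
true_ability = 0.8  # simulated applicant's artificially-constructed ability
belief = TraitBelief(mean=0.5, var=1.0)  # uninformed prior

for _ in range(20):  # each interview turn produces a noisy trait signal
    signal = true_ability + random.gauss(0.0, 0.2)
    belief.update(signal, noise_var=0.2 ** 2)

print(f"mean={belief.mean:.2f} var={belief.var:.4f}")
```

Under this toy model, the posterior mean drifts towards the true ability while the posterior variance shrinks, mirroring the convergence behaviour the abstract reports for the full system.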

Metadata

arXiv ID: 2603.01775
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-02
Fetched: 2026-03-03 04:34
