AI LLM February 24, 2026

InterPilot: Exploring the Design Space of AI-assisted Job Interview Support for HR Professionals

Authors

Zhengtao Xu, Zimo Xia, Zicheng Zhu, Nattapat Boonprakong, Yu-An Chen, Rabih Zbib, Casimiro Pio Carrino, Yi-Chieh Lee

Abstract

Recruitment interviews are cognitively demanding interactions in which interviewers must simultaneously listen, evaluate candidates, take notes, and formulate follow-up questions. To better understand these challenges, we conducted a formative study with eight HR professionals, from which we derived key design goals for real-time AI support. Guided by these insights, we developed InterPilot, a prototype system that augments interviews through intelligent note-taking with post-interview summaries, adaptive question generation, and real-time skill-evidence mapping. We evaluated the system with a further seven HR professionals in mock interviews using a within-subjects design. Results show that InterPilot reduced documentation burden without increasing overall workload, but introduced usability trade-offs related to visual attention and interaction complexity. Qualitative findings further reveal tensions around trust and verification when the AI suggests highly specific technical questions. We discuss implications for designing future real-time human-AI collaboration in professional settings, highlighting the need to balance assistance granularity, attentional demands, and human agency.

Metadata

arXiv ID: 2602.20891
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-02-24
Fetched: 2026-02-25 06:05
