
LFQA-HP-1M: A Large-Scale Human Preference Dataset for Long-Form Question Answering

Authors

Rafid Ishrak Jahan, Fahmid Shahriar Iqbal, Sagnik Ray Choudhury

Abstract

Long-form question answering (LFQA) demands nuanced evaluation of multi-sentence explanatory responses, yet existing metrics often fail to reflect human judgment. We present LFQA-HP-1M, a large-scale dataset comprising 1.3M human pairwise preference annotations for LFQA. We propose nine rubrics for answer-quality evaluation and show that simple linear models based on these features perform comparably to state-of-the-art LLM evaluators. We further examine transitivity consistency, positional bias, and verbosity bias in LLM evaluators and demonstrate their vulnerability to adversarial perturbations. Overall, this work provides one of the largest public LFQA preference datasets and a rubric-driven framework for transparent and reliable evaluation.
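
The abstract's claim that simple linear models over the nine rubric scores rival LLM evaluators suggests a Bradley-Terry-style pairwise model: score each answer on the rubrics, then predict the preferred answer from the difference of the two feature vectors. Below is a minimal sketch of that idea on synthetic data, since the dataset's actual field names are not given here; the rubric count comes from the paper, while the noise model, the scikit-learn logistic regression, and all variable names are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Nine rubric scores per answer (the paper proposes nine rubrics;
    # these synthetic scores are placeholders for the real annotations).
    N_PAIRS, N_RUBRICS = 10_000, 9
    rubrics_a = rng.uniform(0, 1, size=(N_PAIRS, N_RUBRICS))
    rubrics_b = rng.uniform(0, 1, size=(N_PAIRS, N_RUBRICS))

    # Simulate human preferences from a linear utility unknown to the model.
    true_w = rng.normal(size=N_RUBRICS)
    margin = (rubrics_a - rubrics_b) @ true_w
    prefers_a = (margin + rng.normal(scale=0.5, size=N_PAIRS) > 0).astype(int)

    # Linear pairwise model: classify which answer is preferred from the
    # difference of rubric feature vectors.
    X = rubrics_a - rubrics_b
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, prefers_a, test_size=0.2, random_state=0
    )
    clf = LogisticRegression(fit_intercept=False).fit(X_tr, y_tr)
    print(f"pairwise accuracy: {clf.score(X_te, y_te):.3f}")
    print("learned rubric weights:", np.round(clf.coef_.ravel(), 2))

Fitting on score differences with no intercept keeps the model symmetric: swapping the two answers flips the predicted preference, which also makes the learned weights directly interpretable as per-rubric importances.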

Metadata

arXiv ID: 2602.23603
Provider: ARXIV
Primary Category: cs.CL
Categories: cs.CL, cs.AI, cs.IR
Comment: LREC 2026 Accepted
Dataset: https://huggingface.co/datasets/nlpatunt/LFQA-HP-1M
PDF: https://arxiv.org/pdf/2602.23603v1
Published: 2026-02-27
Fetched: 2026-03-02 06:04
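
The dataset link above points to a Hugging Face repository. A hedged sketch of inspecting it with the datasets library follows; the repo id comes from the arXiv comment, but the splits and field names are not documented here, so the code only prints the schema rather than assuming any column names.

    from datasets import load_dataset

    # Repo id taken from the arXiv comment; splits and fields are unverified.
    ds = load_dataset("nlpatunt/LFQA-HP-1M")
    print(ds)                    # available splits and row counts
    split = next(iter(ds))       # name of the first split
    print(ds[split].features)    # column names and types
    print(ds[split][0])          # one example record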
