AI · LLM · March 09, 2026

Toward Robust LLM-Based Judges: Taxonomic Bias Evaluation and Debiasing Optimization

Authors

Hongli Zhou, Hui Huang, Rui Zhang, Kehai Chen, Bing Xu, Conghui Zhu, Tiejun Zhao, Muyun Yang

Abstract

Large language model (LLM)-based judges are widely adopted for automated evaluation and reward modeling, yet their judgments are often affected by judgment biases. Accurately evaluating these biases is essential for ensuring the reliability of LLM-based judges. However, existing studies typically investigate only a limited set of biases under a single judge formulation, either generative or discriminative, and therefore lack a comprehensive evaluation. To bridge this gap, we propose JudgeBiasBench, a benchmark for systematically quantifying biases in LLM-based judges. JudgeBiasBench defines a taxonomy of judgment biases across 4 dimensions and constructs bias-augmented evaluation instances through a controlled bias injection pipeline, covering 12 representative bias types. We conduct extensive experiments across both generative and discriminative judges, revealing that current judges exhibit significant and diverse bias patterns that often compromise the reliability of automated evaluation. To mitigate judgment bias, we propose bias-aware training, which explicitly incorporates bias-related attributes into the training process and encourages judges to disentangle task-relevant quality from bias-correlated cues. By adopting reinforcement learning for generative judges and contrastive learning for discriminative judges, our methods effectively reduce judgment biases while largely preserving general evaluation capability.
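The abstract describes a controlled bias injection pipeline for building bias-augmented evaluation instances, but the paper's actual bias types and injection rules are not given here. The sketch below is a minimal, hypothetical illustration of how such a pipeline could work: each biased instance differs from its clean counterpart in exactly one targeted attribute, so a flip in the judge's preference can be attributed to that bias. The `EvalInstance` structure, the `verbosity` and `authority` transformations, and the registry are assumptions for illustration, not the benchmark's implementation.

```python
# Hypothetical sketch of a controlled bias-injection pipeline (not the
# paper's actual 12 bias types or injection rules).
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class EvalInstance:
    prompt: str
    response_a: str          # clean reference response
    response_b: str          # candidate response that receives the injected bias
    bias_type: str = "none"  # which bias (if any) was injected


def inject_verbosity(text: str) -> str:
    """Pad the response with redundant filler to probe length/verbosity bias."""
    filler = " To elaborate further, the same point can be restated in more detail."
    return text + filler * 3


def inject_authority(text: str) -> str:
    """Prepend an unsupported authority cue to probe authority bias."""
    return "According to a peer-reviewed study, " + text


# Registry of bias transformations (assumed structure for illustration).
BIAS_TRANSFORMS: Dict[str, Callable[[str], str]] = {
    "verbosity": inject_verbosity,
    "authority": inject_authority,
}


def build_biased_instance(base: EvalInstance, bias_type: str) -> EvalInstance:
    """Create a bias-augmented copy: only the targeted attribute changes,
    so any change in the judge's preference can be attributed to the bias."""
    transform = BIAS_TRANSFORMS[bias_type]
    return EvalInstance(
        prompt=base.prompt,
        response_a=base.response_a,
        response_b=transform(base.response_b),
        bias_type=bias_type,
    )


if __name__ == "__main__":
    base = EvalInstance(
        prompt="Explain why the sky is blue.",
        response_a="Rayleigh scattering makes shorter (blue) wavelengths scatter more.",
        response_b="Shorter wavelengths of sunlight scatter more in the atmosphere.",
    )
    biased = build_biased_instance(base, "verbosity")
    print(biased.bias_type, len(biased.response_b))
```

Under this kind of setup, a judge that prefers the padded `response_b` over the otherwise-equivalent clean pair would be exhibiting the injected bias rather than assessing task-relevant quality, which is the distinction the bias-aware training described above aims to teach.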

Metadata

arXiv ID: 2603.08091
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-09
Fetched: 2026-03-10 05:43
