
LLM as a Meta-Judge: Synthetic Data for NLP Evaluation Metric Validation

Authors

Lukáš Eigler, Jindřich Libovický, David Hurych

Abstract

Validating evaluation metrics for natural language generation (NLG) typically relies on expensive and time-consuming human annotations, which predominantly exist only for English datasets. We propose LLM as a Meta-Judge, a scalable framework that uses LLMs to generate synthetic evaluation datasets via controlled semantic degradation of real data, replacing human judgment. We validate our approach using meta-correlation, which measures the alignment between metric rankings derived from synthetic data and those derived from standard human benchmarks. Experiments across machine translation, question answering, and summarization demonstrate that synthetic validation serves as a reliable proxy for human judgment, achieving meta-correlations exceeding 0.9 in multilingual QA, and that it is a viable alternative where human judgments are unavailable or too expensive to obtain. Our code and data will be made publicly available upon paper acceptance.
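
The abstract describes meta-correlation only at a high level. As a minimal sketch, assuming it amounts to a rank correlation between per-metric scores computed on the synthetic dataset versus a human-annotated benchmark, the check could look like the Python below. All metric names and score values here are hypothetical placeholders, not results from the paper.

    # Minimal sketch of the meta-correlation idea from the abstract.
    # Assumption: "meta-correlation" is a rank correlation (Kendall's tau
    # here) between two rankings of candidate metrics: one induced by scores
    # on the synthetic degraded data, one by scores on a human benchmark.
    # All metric names and score values below are illustrative only.

    from scipy.stats import kendalltau

    # Per-metric quality scores, e.g. each metric's correlation with gold
    # labels, computed once on synthetic data and once on the human benchmark.
    synthetic_scores = {"BLEU": 0.41, "chrF": 0.48, "COMET": 0.63, "BERTScore": 0.55}
    human_scores = {"BLEU": 0.38, "chrF": 0.45, "COMET": 0.66, "BERTScore": 0.52}

    metrics = sorted(synthetic_scores)  # fixed order so the score lists pair up
    tau, p = kendalltau(
        [synthetic_scores[m] for m in metrics],
        [human_scores[m] for m in metrics],
    )
    print(f"meta-correlation (Kendall's tau): {tau:.2f}, p-value: {p:.3f}")

A high tau would indicate that ranking metrics on synthetic data reproduces the ranking a human benchmark would give, which is the proxy claim the abstract makes.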

Metadata

arXiv ID: 2603.09403
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-10
Comments: 16 pages, 1 figure, 14 tables
PDF: https://arxiv.org/pdf/2603.09403v1
Fetched: 2026-03-11 06:02
