AI · LLM · February 25, 2026

Distill and Align Decomposition for Enhanced Claim Verification

Authors

Jabez Magomere, Elena Kochkina, Samuel Mensah, Simerjot Kaur, Fernando Acero, Arturo Oncevay, Charese H. Smiley, Xiaomo Liu, Manuela Veloso

Abstract

Complex claim verification requires decomposing sentences into verifiable subclaims, yet existing methods struggle to align decomposition quality with verification performance. We propose a reinforcement learning (RL) approach that jointly optimizes decomposition quality and verifier alignment using Group Relative Policy Optimization (GRPO). Our method integrates: (i) structured sequential reasoning; (ii) supervised finetuning on teacher-distilled exemplars; and (iii) a multi-objective reward balancing format compliance, verifier alignment, and decomposition quality. Across six evaluation settings, our trained 8B decomposer improves downstream verification performance to 71.75% macro-F1, outperforming prompt-based approaches (+1.99, +6.24) and existing RL methods (+5.84). Human evaluation confirms the high quality of the generated subclaims. Our framework enables smaller language models to achieve state-of-the-art claim verification by jointly optimizing for verification accuracy and decomposition quality.
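To make the reward design in point (iii) concrete, below is a minimal sketch of a multi-objective reward that combines format compliance, verifier alignment, and decomposition quality as a weighted sum. This is not the authors' code: the function names, the weights, the numbered-list format check, and the `verifier` and `scorer` callables are all illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a multi-objective reward for a claim decomposer,
# in the spirit of the abstract's (iii). All names and weights are assumed.
import re
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RewardWeights:
    format_w: float = 0.2   # weight for format compliance (illustrative)
    align_w: float = 0.5    # weight for verifier alignment (illustrative)
    quality_w: float = 0.3  # weight for decomposition quality (illustrative)


def format_reward(output: str) -> float:
    """1.0 if the output matches the expected structure, else 0.0.
    Here we assume one numbered subclaim per line, purely for illustration."""
    lines = [l for l in output.strip().splitlines() if l.strip()]
    ok = bool(lines) and all(re.match(r"^\d+\.\s+\S", l) for l in lines)
    return 1.0 if ok else 0.0


def alignment_reward(subclaims: List[str],
                     verifier: Callable[[List[str]], str],
                     gold_label: str) -> float:
    """Verifier alignment proxy: does verifying the subclaims with a fixed
    downstream verifier recover the gold label for the original claim?"""
    return 1.0 if verifier(subclaims) == gold_label else 0.0


def quality_reward(subclaims: List[str], claim: str,
                   scorer: Callable[[str, List[str]], float]) -> float:
    """`scorer` is a placeholder for a decomposition-quality judge
    (e.g. coverage/atomicity scored by a teacher model), returning [0, 1]."""
    return scorer(claim, subclaims)


def total_reward(output: str, subclaims: List[str], claim: str,
                 gold_label: str,
                 verifier: Callable[[List[str]], str],
                 scorer: Callable[[str, List[str]], float],
                 w: RewardWeights = RewardWeights()) -> float:
    """Weighted combination of the three reward terms."""
    return (w.format_w * format_reward(output)
            + w.align_w * alignment_reward(subclaims, verifier, gold_label)
            + w.quality_w * quality_reward(subclaims, claim, scorer))
```

In GRPO as originally described, a group of candidate decompositions is sampled per claim and each scalar reward is normalized within its group (roughly, advantage = (r_i − group mean) / group std), so only the relative ranking of candidates drives the policy update; how the paper instantiates the group size and weighting is not stated in this abstract.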

Metadata

arXiv ID: 2602.21857
Provider: ARXIV
Primary Category: cs.AI
Categories: cs.AI, cs.CL, cs.LG
Comment: EACL Findings 2026
Published: 2026-02-25
Fetched: 2026-02-26 05:00
Links: https://arxiv.org/abs/2602.21857v1 (abstract), https://arxiv.org/pdf/2602.21857v1 (PDF)
