
Towards Reward Modeling for AI Tutors in Math Mistake Remediation

Authors

Kseniia Petukhova, Ekaterina Kochmar

Abstract

Evaluating the pedagogical quality of AI tutors remains challenging: standard NLG metrics do not capture whether responses identify mistakes, scaffold reasoning, or avoid revealing the answers. For the task of mistake remediation, we derive a hierarchy of pedagogical aspects from human pairwise preferences on MRBench, and synthesize minimally contrastive response pairs that differ along key aspects (e.g., mistake identification and location, targetedness, scaffolding, actionability, clarity, and coherence). We develop and release Bradley-Terry preference models trained on weighted-sum rankings that we automatically create from MRBench, synthetic pairs, and combinations of the two. Using only synthetic data, our best model reaches 0.69 pairwise accuracy on a human preference test, and combining weighted-sum data with targeted synthetic groups improves accuracy to 0.74, outperforming larger general-purpose reward models while using only a 0.5B-parameter backbone.
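The core training signal described above can be illustrated with a minimal sketch: a Bradley-Terry preference loss over scalar reward scores, and a weighted-sum ranking that collapses per-aspect scores into one scalar. The aspect weights below are purely hypothetical placeholders, not the paper's actual values, and the functions (`bt_win_prob`, `bt_loss`, `weighted_sum`) are illustrative names rather than the authors' released code.

```python
import math

def bt_win_prob(score_a: float, score_b: float) -> float:
    """Bradley-Terry probability that response A is preferred over B:
    the sigmoid of the difference between the two scalar reward scores."""
    return 1.0 / (1.0 + math.exp(-(score_a - score_b)))

def bt_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-likelihood of the observed preference; minimized
    when the chosen response scores higher than the rejected one."""
    return -math.log(bt_win_prob(score_chosen, score_rejected))

# Hypothetical aspect weights (illustrative only; the paper derives its
# hierarchy of pedagogical aspects from human pairwise preferences).
WEIGHTS = {
    "mistake_identification": 0.30,
    "scaffolding": 0.25,
    "targetedness": 0.20,
    "actionability": 0.15,
    "clarity": 0.10,
}

def weighted_sum(aspect_scores: dict) -> float:
    """Collapse per-aspect scores into one scalar used to rank responses."""
    return sum(w * aspect_scores.get(k, 0.0) for k, w in WEIGHTS.items())
```

In training, `bt_loss` would be averaged over preference pairs whose chosen/rejected labels come from the weighted-sum rankings, and its gradient would update the reward model's parameters; equal scores yield a win probability of exactly 0.5.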

Metadata

arXiv ID: 2603.24375
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-25
Fetched: 2026-03-26 06:02
