
From Solver to Tutor: Evaluating the Pedagogical Intelligence of LLMs with KMP-Bench

Authors

Weikang Shi, Houxing Ren, Junting Pan, Aojun Zhou, Ke Wang, Zimu Lu, Yunqiao Yang, Yuxuan Hu, Linda Wei, Mingjie Zhan, Hongsheng Li

Abstract

Large Language Models (LLMs) show significant potential in AI mathematical tutoring, yet current evaluations often rely on simplistic metrics or narrow pedagogical scenarios, failing to assess comprehensive, multi-turn teaching effectiveness. In this paper, we introduce KMP-Bench, a comprehensive K-8 Mathematical Pedagogical Benchmark designed to assess LLMs from two complementary perspectives. The first module, KMP-Dialogue, evaluates holistic pedagogical capabilities against six core principles (e.g., Challenge, Explanation, Feedback), leveraging a novel multi-turn dialogue dataset constructed by weaving together diverse pedagogical components. The second module, KMP-Skills, provides a granular assessment of foundational tutoring abilities, including multi-turn problem-solving, error detection and correction, and problem generation. Our evaluations on KMP-Bench reveal a key disparity: while leading LLMs excel at tasks with verifiable solutions, they struggle with the nuanced application of pedagogical principles. Additionally, we present KMP-Pile, a large-scale (150K) dialogue dataset. Models fine-tuned on KMP-Pile show substantial improvement on KMP-Bench, underscoring the value of pedagogically rich training data for developing more effective AI math tutors.
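To make the KMP-Dialogue setup more concrete, the sketch below shows one plausible way a multi-turn tutoring dialogue could be represented and scored against pedagogical principles with an LLM judge. The record fields, the 0-1 scoring scale, the rubric prompt, and the `judge` interface are illustrative assumptions, not the paper's released data format or evaluation protocol; only three of the six principles named in the abstract are listed.

```python
from dataclasses import dataclass, field

# Three of the six principles named in the abstract; the full set is defined in the paper.
PRINCIPLES = ["Challenge", "Explanation", "Feedback"]

@dataclass
class Turn:
    role: str   # "student" or "tutor"
    text: str

@dataclass
class DialogueRecord:
    problem: str
    turns: list = field(default_factory=list)

def score_dialogue(record: DialogueRecord, judge) -> dict:
    """Score a tutoring dialogue against each principle.

    `judge` is any callable mapping a prompt string to a float in [0, 1];
    in a real evaluation it would wrap an LLM call.
    """
    transcript = "\n".join(f"{t.role}: {t.text}" for t in record.turns)
    scores = {}
    for principle in PRINCIPLES:
        prompt = (
            f"Problem: {record.problem}\n"
            f"Dialogue:\n{transcript}\n"
            f"Rate how well the tutor satisfies the principle '{principle}' on a 0-1 scale."
        )
        scores[principle] = judge(prompt)
    return scores

if __name__ == "__main__":
    demo = DialogueRecord(
        problem="What is 3/4 + 1/8?",
        turns=[
            Turn("student", "Is it 4/12?"),
            Turn("tutor", "Good try, but check the denominators first. What is 3/4 written in eighths?"),
            Turn("student", "6/8, so the answer is 7/8."),
        ],
    )
    # Stub judge for the demo; replace with an actual LLM-backed scorer.
    print(score_dialogue(demo, judge=lambda prompt: 0.5))
```

Per-principle scores of this kind could then be averaged across dialogues to compare tutors, which is one natural way to read the abstract's claim that strong solvers still lag on the pedagogical dimensions.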

Metadata

arXiv ID: 2603.02775
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-03
