
Code2Math: Can Your Code Agent Effectively Evolve Math Problems Through Exploration?

Authors

Dadi Guo, Yuejin Xie, Qingyu Liu, Jiayu Liu, Zhiyuan Fan, Qihan Ren, Shuai Shao, Tianyi Zhou, Dongrui Liu, Yi R. Fung

Abstract

As large language models (LLMs) advance their mathematical capabilities toward the IMO level, the scarcity of challenging, high-quality problems for training and evaluation has become a significant bottleneck. Simultaneously, recent code agents have demonstrated sophisticated skills in agentic coding and reasoning, suggesting that code execution can serve as a scalable environment for mathematical experimentation. In this paper, we investigate the potential of code agents to autonomously evolve existing math problems into more complex variations. We introduce a multi-agent framework designed to perform problem evolution while validating the solvability and increased difficulty of the generated problems. Our experiments demonstrate that, given sufficient test-time exploration, code agents can synthesize new, solvable problems that are structurally distinct from and more challenging than the originals. This work provides empirical evidence that code-driven agents can serve as a viable mechanism for synthesizing high-difficulty mathematical reasoning problems within scalable computational environments. Our data is available at https://github.com/TarferSoul/Code2Math.
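The abstract describes a multi-agent loop in which one agent proposes harder variants of a seed problem while others validate solvability and increased difficulty. As a rough illustration only, the control flow might look like the following sketch; every name, signature, and acceptance criterion here is an assumption for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of an evolve-and-validate loop in the spirit of the
# abstract. All agent interfaces and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Problem:
    statement: str
    answer: str
    difficulty: float  # e.g., an estimated failure rate of a reference solver


def evolve_problem(
    seed: Problem,
    propose: Callable[[Problem], Problem],      # "evolver" agent: rewrites the problem
    solve: Callable[[Problem], Optional[str]],  # "solver" agent: code-executing check
    estimate_difficulty: Callable[[Problem], float],
    max_attempts: int = 8,                      # test-time exploration budget
) -> Optional[Problem]:
    """Return a harder, still-solvable variant of `seed`, or None on failure."""
    for _ in range(max_attempts):
        candidate = propose(seed)
        solution = solve(candidate)
        if solution is None:  # reject variants the solver cannot verify
            continue
        candidate.answer = solution
        candidate.difficulty = estimate_difficulty(candidate)
        if candidate.difficulty > seed.difficulty:  # require increased difficulty
            return candidate
    return None
```

The key property the paper claims is that, with a large enough `max_attempts` (test-time exploration), such a loop yields new problems that are both solvable and measurably harder than their seeds.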

Metadata

arXiv ID: 2603.03202
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-03
Fetched: 2026-03-04 03:41
Comment: Under review in ICML 2026

