Paper
Reasoning over mathematical objects: on-policy reward modeling and test time aggregation
Authors
Pranjal Aggarwal, Marjan Ghazvininejad, Seungone Kim, Ilia Kulikov, Jack Lanchantin, Xian Li, Tianjian Li, Bo Liu, Graham Neubig, Anaelia Ovalle, Swarnadeep Saha, Sainbayar Sukhbaatar, Sean Welleck, Jason Weston, Chenxi Whitehouse, Adina Williams, Jing Xu, Ping Yu, Weizhe Yuan, Jingyu Zhang, Wenting Zhao
Abstract
The ability to precisely derive mathematical objects is a core requirement for downstream STEM applications, including mathematics, physics, and chemistry, where reasoning must culminate in formally structured expressions. Yet current LM evaluations of mathematical and scientific reasoning rely heavily on simplified answer formats, such as numerical values or multiple-choice options, due to the convenience of automated assessment. In this paper we provide three contributions for improving reasoning over mathematical objects: (i) we build and release training data and benchmarks for deriving mathematical objects, the Principia suite; (ii) we provide training recipes with strong LLM judges and verifiers, where we show that on-policy judge training boosts performance; (iii) we show how on-policy training can also be used to scale test-time compute via aggregation. We find that strong LMs such as Qwen3-235B and o3 struggle on Principia, whereas our training recipes bring significant improvements across different LLM backbones while simultaneously improving results on existing numerical and MCQA tasks, demonstrating cross-format generalization of reasoning abilities.
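Editor's note: contribution (iii) concerns aggregating multiple sampled solutions at test time. The abstract does not specify the aggregation recipe, so the sketch below is only an illustration of two common patterns it could resemble (best-of-N reranking with a judge score, and self-consistency-style majority voting over final expressions); the callables generate, judge, and extract_answer are hypothetical placeholders, not the paper's API.

# Minimal sketch of test-time aggregation with an LLM judge.
# generate(problem) -> candidate solution text          (hypothetical)
# judge(problem, solution) -> scalar reward             (hypothetical)
# extract_answer(solution) -> final mathematical object (hypothetical)
from collections import Counter

def aggregate_best_of_n(problem, generate, judge, n=8):
    """Sample n candidate derivations and return the one the judge scores highest."""
    candidates = [generate(problem) for _ in range(n)]   # n independent samples
    scores = [judge(problem, c) for c in candidates]     # one scalar score per candidate
    best = max(range(n), key=lambda i: scores[i])        # best-of-n by judge score
    return candidates[best]

def aggregate_majority(problem, generate, extract_answer, n=8):
    """Self-consistency variant: majority vote over the extracted final expressions."""
    answers = [extract_answer(generate(problem)) for _ in range(n)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer

Either pattern turns extra test-time compute (more samples) into a single aggregated answer; the paper's actual method, and how on-policy training feeds into it, may differ.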
Metadata
arXiv ID: 2603.18886v1 (https://arxiv.org/abs/2603.18886v1)
PDF: https://arxiv.org/pdf/2603.18886v1
Published: 2026-03-19
Categories: cs.AI (primary), cs.CL
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25