Tags: AI, LLM · March 13, 2026

TaoBench: Do Automated Theorem Prover LLMs Generalize Beyond MathLib?

Authors

Alexander K Taylor, Junyi Zhang, Ethan Ji, Vigyan Sahai, Haikang Deng, Yuanzhou Chen, Yifan Yuan, Di Wu, Jia-Chen Gu, Kai-Wei Chang, Nanyun Peng, Amit Sahai, Wei Wang

Abstract

Automated theorem proving (ATP) benchmarks consist largely of problems formalized in Mathlib, so current ATP training and evaluation are heavily biased toward Mathlib's definitional framework. Frontier mathematics, however, is often exploratory and prototype-heavy, relying on bespoke constructions that deviate from standard libraries. In this work, we evaluate the robustness of current ATP systems under a novel definitional framework, measuring the performance gap between standard-library problems and bespoke mathematical constructions. We introduce TaoBench, an undergraduate-level benchmark derived from Terence Tao's Analysis I, which formalizes analysis by building core mathematical concepts from scratch rather than relying on standard Mathlib definitions, and by mixing from-scratch and Mathlib constructions. For fair evaluation, we build an agentic pipeline that automatically extracts a compilable, self-contained local environment for each problem. To isolate the effect of the definitional framework, we additionally translate every problem into a mathematically equivalent Mathlib formulation, yielding paired TaoBench-Mathlib statements for direct comparison. While state-of-the-art ATP models perform capably within the Mathlib framework, performance drops by roughly 26% on average on the definitionally equivalent Tao formulations. This indicates that the main bottleneck is limited generalization across definitional frameworks rather than task difficulty. TaoBench thus highlights a gap between benchmark performance and real-world applicability, and provides a concrete foundation for developing and testing provers better aligned with research mathematics.
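To make the paired-formulation idea concrete, here is a hypothetical Lean 4 sketch (not taken from the benchmark; `TendsTo` is an illustrative name) of the kind of pairing the abstract describes: a bespoke, Tao-style convergence definition stated from first principles, alongside the equivalence with Mathlib's standard `Filter.Tendsto` vocabulary.

```lean
import Mathlib

-- A bespoke "from scratch" definition of sequence convergence, in the
-- epsilon-N style of Tao's Analysis I (the name TendsTo is illustrative):
def TendsTo (a : ℕ → ℝ) (L : ℝ) : Prop :=
  ∀ ε > 0, ∃ N : ℕ, ∀ n ≥ N, |a n - L| < ε

-- The mathematically equivalent statement in Mathlib's standard
-- filter-based vocabulary. A prover trained only on Mathlib idioms may
-- solve the right-hand formulation yet fail on the left-hand one.
theorem tendsTo_iff_tendsto (a : ℕ → ℝ) (L : ℝ) :
    TendsTo a L ↔ Filter.Tendsto a Filter.atTop (nhds L) := by
  sorry  -- proof omitted in this sketch; e.g. via Metric.tendsto_atTop
```

Both sides express the same mathematical fact, so any performance gap between them reflects sensitivity to the definitional framework rather than to problem difficulty, which is the comparison TaoBench is built to measure.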

Metadata

arXiv ID: 2603.12744
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-13
Fetched: 2026-03-16 06:01
