AI LLM February 24, 2026

Pipeline for Verifying LLM-Generated Mathematical Solutions

Authors

Varvara Sazonova, Dmitri Shmelkin, Stanislav Kikot, Vasily Motolygin

Abstract

With the growing popularity of Large Reasoning Models and their strong results on mathematical problems, it becomes crucial to measure their capabilities accurately. We introduce a pipeline for both automatic and interactive verification of solutions as a more accurate alternative to answer-only checking, which is currently the most popular approach in benchmarks. The pipeline can also be used to generate correct solutions in both formal and informal languages. Its structure includes three AI agents, which can be chosen according to the benchmark. The key idea is to use prompts that elicit the solution in a specific form, which allows easier verification with proof assistants and makes it possible to use small models ($\le 8B$). Experiments on several datasets suggest a low probability of false positives. The open-source implementation, with instructions for setting up a server, is available at https://github.com/LogicEnj/lean4_verification_pipeline.
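To make the idea concrete, here is a minimal Lean 4 sketch (assuming Mathlib) of the kind of artifact such a pipeline could check: a candidate solution rendered as a formal theorem, which the proof assistant either accepts or rejects. The statement and proof below are illustrative, not taken from the paper.

```lean
import Mathlib

-- Hypothetical example: an LLM-generated solution to
-- "Show that the sum of two even numbers is even,"
-- expressed in a form a Lean checker can verify mechanically.
theorem even_add_even (m n : ℕ) (hm : Even m) (hn : Even n) :
    Even (m + n) := by
  obtain ⟨a, ha⟩ := hm   -- m = a + a
  obtain ⟨b, hb⟩ := hn   -- n = b + b
  exact ⟨a + b, by omega⟩
```

If the generated proof is flawed, `lean` (or `lake build`) fails with an error rather than silently accepting it, which is what makes proof-assistant verification a stronger signal than comparing final answers.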

Metadata

arXiv ID: 2602.20770
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-24
Fetched: 2026-02-25 06:05
