VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean

Authors

Yutong Xin, Qiaochu Chen, Greg Durrett, Işil Dillig

Abstract

Large language models have achieved striking results in interactive theorem proving, particularly in Lean. However, most benchmarks for LLM-based proof automation are drawn from mathematics in the Mathlib ecosystem, whereas proofs in software verification are developed inside definition-rich codebases with substantial project-specific libraries. We introduce VeriSoftBench, a benchmark of 500 Lean 4 proof obligations drawn from open-source formal-methods developments and packaged to preserve realistic repository context and cross-file dependencies. Our evaluation of frontier LLMs and specialized provers yields three observations. First, provers tuned for Mathlib-style mathematics transfer poorly to this repository-centric setting. Second, success is strongly correlated with transitive repository dependence: tasks whose proofs draw on large, multi-hop dependency closures are less likely to be solved. Third, providing curated context restricted to a proof's dependency closure improves performance relative to exposing the full repository, but nevertheless leaves substantial room for improvement. Our benchmark and evaluation suite are released at https://github.com/utopia-group/VeriSoftBench.
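
To make the abstract's contrast concrete, the toy Lean 4 sketch below shows the shape of a repository-style proof obligation: its statement and proof depend on project-local definitions rather than Mathlib lemmas. The example is purely illustrative and is not drawn from VeriSoftBench; the Counter structure, its bump operation, and the positive invariant are hypothetical stand-ins for the cross-file, project-specific definitions a benchmark task would depend on.

  -- Hypothetical sketch (not part of VeriSoftBench): a proof obligation whose
  -- dependency closure consists of project-local definitions, not Mathlib.
  namespace RepoProject

  /-- A project-specific state type; imagine it defined in one file of the repo. -/
  structure Counter where
    val : Nat

  /-- A project-local operation, defined in a second file. -/
  def Counter.bump (c : Counter) : Counter :=
    { c with val := c.val + 1 }

  /-- A project-local invariant, defined in a third file. -/
  def Counter.positive (c : Counter) : Prop :=
    0 < c.val

  /-- The benchmark-style obligation: proving it requires knowing the three
      repository definitions above rather than any Mathlib lemma. -/
  theorem Counter.bump_preserves_positive (c : Counter) (h : c.positive) :
      (c.bump).positive := by
    have hv : 0 < c.val := h    -- `positive` unfolds definitionally
    show 0 < c.val + 1          -- `bump` and `positive` unfold definitionally
    exact Nat.lt_trans hv (Nat.lt_succ_self _)

  end RepoProject

Even in this toy setting, solving the obligation requires resolving the definitions of bump and positive across "files"; the abstract's second finding suggests that as such dependency closures grow larger and span more hops, current provers succeed less often.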

Metadata

arXiv ID: 2602.18307
Primary Category: cs.SE
Published: 2026-02-20
