AI · LLM · March 20, 2026

On the Ability of Transformers to Verify Plans

Authors

Yash Sarrof, Yupei Du, Katharina Stein, Alexander Koller, Sylvie Thiébaux, Michael Hahn

Abstract

Transformers have shown inconsistent success in AI planning tasks, and theoretical understanding of when generalization should be expected has been limited. We take important steps towards addressing this gap by analyzing the ability of decoder-only models to verify whether a given plan correctly solves a given planning instance. To analyze the general setting where the number of objects -- and thus the effective input alphabet -- grows at test time, we introduce C*-RASP, an extension of C-RASP designed to establish length generalization guarantees for transformers under simultaneous growth in sequence length and vocabulary size. Our results identify a large class of classical planning domains for which transformers can provably learn to verify long plans, as well as structural properties that significantly affect the learnability of length-generalizable solutions. Empirical experiments corroborate our theory.
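To make the idea of counting-based plan verification concrete, here is a minimal sketch in the spirit of C-RASP's prefix-counting operations. The toy pickup/putdown domain, action names, and encoding below are illustrative assumptions for this page, not the paper's construction: the point is only that some verification conditions reduce to comparisons over running counts, the kind of positionwise check a counting logic can express.

```python
# Illustrative sketch (assumed toy domain, not the paper's construction):
# verify a pickup/putdown plan using only running prefix counts per object,
# mirroring the counting comparisons available in C-RASP-style logics.
from collections import defaultdict

def verify_plan(plan):
    """Return True iff the plan is valid in the toy domain.

    Invariants checked positionwise via prefix counts:
      - pickup(x) is illegal if x is already held;
      - putdown(x) is illegal if x is not held;
      - at the end, every picked-up object has been put down.
    """
    held = defaultdict(int)  # prefix count of pickup(x) minus putdown(x)
    for action, obj in plan:
        if action == "pickup":
            if held[obj] == 1:        # already holding this object
                return False
            held[obj] += 1
        elif action == "putdown":
            if held[obj] == 0:        # nothing to put down
                return False
            held[obj] -= 1
        else:
            return False              # unknown action
    return all(count == 0 for count in held.values())
```

Note that the verifier never inspects anything beyond per-object prefix counts, so the number of objects (the "effective input alphabet") can grow at test time without changing the per-position check.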

Metadata

arXiv ID: 2603.19954
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-20
Fetched: 2026-03-23 16:54
