March 12, 2026

TopoBench: Benchmarking LLMs on Hard Topological Reasoning

Authors

Mayug Maniparambil, Nils Hoehing, Janak Kapuriya, Arjun Karuvally, Ellen Rushe, Anthony Ventresque, Noel O'Connor, Fergal Reid

Abstract

Solving topological grid puzzles requires reasoning over global spatial invariants such as connectivity, loop closure, and region symmetry, and it remains challenging even for the most powerful large language models (LLMs). To study these abilities under controlled settings, we introduce TopoBench, a benchmark of six puzzle families across three difficulty levels. We evaluate strong reasoning LLMs on TopoBench and find that even frontier models solve fewer than one quarter of hard instances, with two families remaining nearly unsolved. To investigate whether these failures stem from reasoning limitations or from difficulty extracting and maintaining spatial constraints, we annotate 750 chain-of-thought traces with an error taxonomy that surfaces four candidate causal failure modes, then test them with targeted interventions simulating each error type. These interventions show that certain error patterns, such as premature commitment and constraint forgetting, directly impair the ability to solve a puzzle, while repeated reasoning is a benign side effect of search. Finally, we study mitigation strategies, including prompt guidance, cell-aligned grid representations, and tool-based constraint checking, finding that the bottleneck lies in extracting constraints from spatial representations rather than in reasoning over them. Code and data are available at github.com/mayug/topobench-benchmark.
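To make the kind of "global spatial invariant" concrete, the sketch below checks one such invariant, connectivity, on a small grid: whether all shaded cells form a single orthogonally connected region. This is a minimal illustration of what a tool-based constraint checker could verify, not the benchmark's actual checker; the function name, grid encoding (1 = shaded), and puzzle rule are assumptions for the example.

```python
from collections import deque

def shaded_connected(grid):
    """Return True if all shaded cells (value 1) form one orthogonally
    connected region -- a global invariant of the kind the paper studies.
    (Illustrative sketch; not the TopoBench checker.)"""
    cells = {(r, c) for r, row in enumerate(grid)
             for c, v in enumerate(row) if v == 1}
    if not cells:
        return True  # vacuously connected
    start = next(iter(cells))
    seen = {start}
    queue = deque([start])
    while queue:  # breadth-first flood fill over shaded neighbours
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if n in cells and n not in seen:
                seen.add(n)
                queue.append(n)
    return len(seen) == len(cells)
```

Note that the check is inherently global: no single cell is "wrong" in isolation, which is why such constraints are hard to track incrementally in a chain-of-thought trace.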

Metadata

arXiv ID: 2603.12133
Provider: ARXIV
Primary Category: cs.AI
Categories: cs.AI, cs.CL
Comment: Accepted at the Workshop on Logical Reasoning of Large Language Models, ICLR 2026
Published: 2026-03-12
Fetched: 2026-03-13 06:02
