
Mind over Space: Can Multimodal Large Language Models Mentally Navigate?

Authors

Qihui Zhu, Shouwei Ruan, Xiao Yang, Hao Jiang, Yao Huang, Shiji Zhao, Hanwei Fan, Hang Su, Xingxing Wei

Abstract

Despite the widespread adoption of MLLMs in embodied agents, their capabilities remain largely confined to reactive planning from immediate observations, consistently failing in spatial reasoning across extensive spatiotemporal scales. Cognitive science reveals that Biological Intelligence (BI) thrives on "mental navigation": the strategic construction of spatial representations from experience and the subsequent mental simulation of paths prior to action. To bridge the gap between AI and BI, we introduce Video2Mental, a pioneering benchmark for evaluating the mental navigation capabilities of MLLMs. The task requires constructing hierarchical cognitive maps from long egocentric videos and generating landmark-based path plans step by step, with planning accuracy verified through simulator-based physical interaction. Our benchmarking results reveal that mental navigation capability does not naturally emerge from standard pre-training. Frontier MLLMs struggle profoundly with zero-shot structured spatial representation, and their planning accuracy decays precipitously over extended horizons. To overcome this, we propose NavMind, a reasoning model that internalizes mental navigation using explicit, fine-grained cognitive maps as learnable intermediate representations. Through a difficulty-stratified progressive supervised fine-tuning paradigm, NavMind effectively bridges the gap between raw perception and structured planning. Experiments demonstrate that NavMind achieves superior mental navigation capabilities, significantly outperforming frontier commercial and spatial MLLMs.
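For concreteness, a minimal sketch of what "a hierarchical cognitive map plus a landmark-based, step-by-step plan" could look like as a data structure is shown below. The class names (Landmark, CognitiveMap), fields, and the plan_path helper are hypothetical illustrations chosen for this sketch, not the paper's actual representation; the paper only states that the maps are hierarchical and that plans are sequences of landmarks verified in a simulator.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: landmarks grouped into coarse regions (the
# "hierarchical" part), with an adjacency structure over landmarks.
# plan_path stands in for the "mental simulation of paths prior to
# action" described in the abstract; it is not the paper's algorithm.

@dataclass
class Landmark:
    name: str    # e.g. "kitchen_counter"
    region: str  # coarse region the landmark belongs to, e.g. "kitchen"

@dataclass
class CognitiveMap:
    landmarks: dict[str, Landmark] = field(default_factory=dict)
    edges: dict[str, set[str]] = field(default_factory=dict)  # adjacency by name

    def connect(self, a: str, b: str) -> None:
        """Record that two landmarks are mutually reachable."""
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def plan_path(self, start: str, goal: str) -> list[str] | None:
        """Breadth-first search over landmarks: mentally simulate a
        route through the constructed map before acting."""
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # goal unreachable in the constructed map

# Usage: build a toy map and plan a landmark-based route.
m = CognitiveMap()
for name, region in [("door", "hall"), ("sofa", "living_room"), ("counter", "kitchen")]:
    m.landmarks[name] = Landmark(name, region)
m.connect("door", "sofa")
m.connect("sofa", "counter")
print(m.plan_path("door", "counter"))  # ['door', 'sofa', 'counter']
```

The point of the sketch is the decomposition the abstract argues for: an explicit, queryable spatial representation sits between raw video perception and action, so plans can be produced and checked as landmark sequences rather than as reactive, observation-by-observation decisions.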

Metadata

arXiv ID: 2603.21577
URL: https://arxiv.org/abs/2603.21577v1
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-23
Fetched: 2026-03-24 06:02
