March 10, 2026

EXPLORE-Bench: Egocentric Scene Prediction with Long-Horizon Reasoning

Authors

Chengjun Yu, Xuhan Zhu, Chaoqun Du, Pengfei Yu, Wei Zhai, Yang Cao, Zheng-Jun Zha

Abstract

Multimodal large language models (MLLMs) are increasingly considered a foundation for embodied agents, yet it remains unclear whether they can reliably reason about the long-term physical consequences of actions from an egocentric viewpoint. We study this gap through a new task, Egocentric Scene Prediction with LOng-horizon REasoning: given an initial-scene image and a sequence of atomic action descriptions, a model is asked to predict the final scene after all actions are executed. To enable systematic evaluation, we introduce EXPLORE-Bench, a benchmark curated from real first-person videos spanning diverse scenarios. Each instance pairs a long action sequence with a structured final-scene annotation, including object categories, visual attributes, and inter-object relations, which supports fine-grained, quantitative assessment. Experiments on a range of proprietary and open-source MLLMs reveal a significant performance gap relative to humans, indicating that long-horizon egocentric reasoning remains a major challenge. We further analyze test-time scaling via stepwise reasoning and show that decomposing long action sequences can improve performance to some extent, while incurring non-trivial computational overhead. Overall, EXPLORE-Bench provides a principled testbed for measuring and advancing long-horizon reasoning for egocentric embodied perception.
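The page does not specify the benchmark's instance schema or prompting details, so the Python sketch below is only a rough illustration of the setup described in the abstract: an initial-scene image, a long sequence of atomic actions, a structured final-scene annotation, and a stepwise (decomposed) prediction loop. All field names, the chunking strategy, and the model.predict call are assumptions for illustration, not the authors' format or API.

from dataclasses import dataclass
from typing import Dict, List

# Hypothetical instance layout inferred from the abstract: the structured
# final-scene annotation covers object categories, visual attributes, and
# inter-object relations. Field names are illustrative, not the real schema.
@dataclass
class SceneAnnotation:
    objects: List[str]                 # e.g. ["mug", "laptop", "drawer"]
    attributes: Dict[str, List[str]]   # e.g. {"mug": ["empty", "upside down"]}
    relations: List[tuple]             # e.g. [("mug", "inside", "drawer")]

@dataclass
class ExploreInstance:
    initial_image_path: str            # first-person initial-scene frame
    actions: List[str]                 # atomic action descriptions, in order
    final_scene: SceneAnnotation       # ground-truth annotation for scoring

def predict_stepwise(instance: ExploreInstance, model, chunk_size: int = 5):
    """Sketch of the stepwise reasoning discussed in the abstract: instead of
    conditioning on the full action sequence in one call, feed the actions in
    short chunks and carry an intermediate scene state forward. `model.predict`
    is a placeholder for an MLLM call, not a real library function."""
    state = None  # running description of the predicted scene
    for i in range(0, len(instance.actions), chunk_size):
        chunk = instance.actions[i:i + chunk_size]
        state = model.predict(image=instance.initial_image_path,
                              prior_state=state,
                              actions=chunk)
    return state  # final predicted scene, compared against instance.final_scene

Under this reading, the cheaper baseline would pass all actions in a single call, while the decomposed variant trades extra model calls (the "non-trivial computational overhead" noted above) for more reliable long-horizon tracking.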

Metadata

arXiv ID: 2603.09731
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-10
Fetched: 2026-03-11 06:02
