
March 12, 2026

Spatial-TTT: Streaming Visual-based Spatial Intelligence with Test-Time Training

Authors

Fangfu Liu, Diankun Wu, Jiawei Chi, Yimo Cai, Yi-Hsin Hung, Xumin Yu, Hao Li, Han Hu, Yongming Rao, Yueqi Duan

Abstract

Humans perceive and understand real-world spaces through a stream of visual observations. The ability to maintain and update spatial evidence in a streaming fashion from potentially unbounded video streams is therefore essential for spatial intelligence. The core challenge is not simply longer context windows but how spatial information is selected, organized, and retained over time. In this paper, we propose Spatial-TTT, a step towards streaming visual-based spatial intelligence with test-time training (TTT), which adapts a subset of parameters (fast weights) to capture and organize spatial evidence over long-horizon scene videos. Specifically, we design a hybrid architecture and adopt large-chunk updates in parallel with sliding-window attention for efficient spatial video processing. To further promote spatial awareness, we introduce a spatial-predictive mechanism, applied to the TTT layers via 3D spatiotemporal convolution, which encourages the model to capture geometric correspondence and temporal continuity across frames. Beyond the architecture design, we construct a dataset with dense 3D spatial descriptions, which guides the model to update its fast weights so as to memorize and organize global 3D spatial signals in a structured manner. Extensive experiments demonstrate that Spatial-TTT improves long-horizon spatial understanding and achieves state-of-the-art performance on video spatial benchmarks. Project page: https://liuff19.github.io/Spatial-TTT.
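To make the mechanism in the abstract concrete, below is a minimal PyTorch sketch of a test-time-training layer with fast weights and a spatial-predictive objective. It is a hedged reconstruction, not the authors' implementation: the names (`TTTLayerSketch`, `ttt_update`, `_apply_fast`), the single linear fast-weight map, the next-frame MSE loss, and the learning rate are hypothetical stand-ins, since the abstract does not specify the actual update rule, chunk size, loss, or how the TTT path interleaves with sliding-window attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TTTLayerSketch(nn.Module):
    """Minimal sketch of a test-time-training (TTT) layer.

    A small set of "fast weights" is updated online on each incoming
    chunk of video features via a self-supervised spatial-predictive
    loss, while everything else (the "slow weights") stays frozen.
    """

    def __init__(self, dim: int, lr: float = 1e-2):
        super().__init__()
        # Fast weights: the only parameters adapted at test time.
        self.fast = nn.Parameter(torch.eye(dim))
        # Hypothetical spatial-predictive head: a 3D spatiotemporal
        # convolution that predicts the next frame's features, which
        # rewards geometric correspondence and temporal continuity.
        self.predict = nn.Conv3d(dim, dim, kernel_size=3, padding=1)
        self.lr = lr

    def _apply_fast(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise fast-weight map on (B, C, T, H, W) features.
        return torch.einsum("oc,bcthw->bothw", self.fast, x)

    def ttt_update(self, chunk: torch.Tensor) -> None:
        """One large-chunk fast-weight update (chunk: B, C, T, H, W)."""
        pred = self.predict(self._apply_fast(chunk[:, :, :-1]))  # predict frames 1..T-1
        target = chunk[:, :, 1:]                                 # next-frame targets
        loss = F.mse_loss(pred, target)
        (grad,) = torch.autograd.grad(loss, self.fast)
        with torch.no_grad():
            self.fast -= self.lr * grad  # single online SGD step

    def forward(self, chunk: torch.Tensor) -> torch.Tensor:
        self.ttt_update(chunk)          # adapt on the current chunk,
        return self._apply_fast(chunk)  # then process it with updated weights


# Toy usage: one chunk of 8 frames with 16-channel, 14x14 feature maps.
layer = TTTLayerSketch(dim=16)
feats = torch.randn(1, 16, 8, 14, 14)
out = layer(feats)  # (1, 16, 8, 14, 14)
```

One plausible reading of the hybrid design: a large-chunk update amortizes the cost of the gradient step over many frames at once, while sliding-window attention covers fine-grained local context, so the fast weights only need to store compressed, global spatial evidence rather than every token.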

Metadata

arXiv ID: 2603.12255
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-12
Fetched: 2026-03-13 06:02

