Paper
tttLRM: Test-Time Training for Long Context and Autoregressive 3D Reconstruction
Authors
Chen Wang, Hao Tan, Wang Yifan, Zhiqin Chen, Yuheng Liu, Kalyan Sunkavalli, Sai Bi, Lingjie Liu, Yiwei Hu
Abstract
We propose tttLRM, a novel large 3D reconstruction model that leverages a Test-Time Training (TTT) layer to enable long-context, autoregressive 3D reconstruction with linear computational complexity, further scaling the model's capability. Our framework efficiently compresses multiple image observations into the fast weights of the TTT layer, forming an implicit 3D representation in the latent space that can be decoded into various explicit formats, such as Gaussian Splats (GS) for downstream applications. The online learning variant of our model supports progressive 3D reconstruction and refinement from streaming observations. We demonstrate that pretraining on novel view synthesis tasks effectively transfers to explicit 3D modeling, resulting in improved reconstruction quality and faster convergence. Extensive experiments show that our method achieves superior performance in feedforward 3D Gaussian reconstruction compared to state-of-the-art approaches on both objects and scenes.
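The abstract's core mechanism, compressing a stream of observations into the "fast weights" of a Test-Time Training layer with linear cost in sequence length, can be illustrated with a toy sketch. This is not the tttLRM implementation (the paper's architecture, losses, dimensions, and learning rates are unknown here); it is a minimal fast-weight layer in the general TTT style, where the hidden state is a small weight matrix updated by one gradient step per token:

```python
# Toy sketch of a Test-Time Training (TTT) layer: the hidden state is a
# weight matrix W, updated by ONE gradient step per input token, so a
# stream of N tokens costs O(N) -- linear, unlike quadratic attention.
# Illustrative only; the loss, update rule, and dims are assumptions,
# not the tttLRM architecture.

def matvec(W, x):
    """Dense matrix-vector product on plain lists."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def ttt_step(W, k, v, lr=0.1):
    """One online update: nudge W so that W @ k moves toward v.

    Gradient of 0.5 * ||W k - v||^2 w.r.t. W is (W k - v) k^T,
    so the step is W <- W - lr * (W k - v) k^T.
    """
    err = [p - t for p, t in zip(matvec(W, k), v)]
    return [[w - lr * e * kj for w, kj in zip(row, k)]
            for row, e in zip(W, err)]

def ttt_layer(tokens, dim):
    """Compress a (key, value, query) token stream into fast weights W,
    reading out W @ q with the freshly updated weights at each step."""
    W = [[0.0] * dim for _ in range(dim)]
    outputs = []
    for k, v, q in tokens:
        W = ttt_step(W, k, v)          # "train" on this observation
        outputs.append(matvec(W, q))   # read out through updated weights
    return W, outputs

if __name__ == "__main__":
    # Repeatedly feeding the same (k, v) pair: W learns the mapping
    # k -> v, so the readout W @ k approaches v over the stream.
    k, v = [1.0, 0.0], [0.0, 1.0]
    _, outs = ttt_layer([(k, v, k)] * 50, dim=2)
    print([round(x, 3) for x in outs[-1]])   # approaches [0.0, 1.0]
```

Because the state has fixed size regardless of how many views have been seen, this kind of layer naturally supports the streaming, progressive refinement the abstract describes: each new observation is absorbed by one cheap update rather than being re-attended over.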
Metadata
arXiv ID: 2602.20160v1
Published: 2026-02-23
Primary category: cs.CV
Comment: Accepted by CVPR 2026
Project page: https://cwchenwang.github.io/tttLRM
PDF: https://arxiv.org/pdf/2602.20160v1