February 25, 2026

Geometry-as-context: Modulating Explicit 3D in Scene-consistent Video Generation to Geometry Context

Authors

JiaKui Hu, Jialun Liu, Liying Yang, Xinliang Zhang, Kaiwen Li, Shuang Zeng, Yuanwei Li, Haibin Huang, Chi Zhang, Yanye Lu

Abstract

Scene-consistent video generation aims to create videos that explore 3D scenes along a camera trajectory. Previous methods rely either on video generation models with external memory for consistency, or on iterative 3D reconstruction and inpainting; both accumulate errors during inference due to incorrect intermediate outputs, non-differentiable processes, and separately trained models. To overcome these limitations, we introduce "geometry-as-context". Using a single autoregressive camera-controlled video generation model, it iterates two steps: (1) estimating the geometry of the current view needed for 3D reconstruction, and (2) simulating and restoring novel-view images rendered from the 3D scene. Within this multi-task framework, we develop a camera-gated attention module that improves the model's ability to leverage camera poses. During training, text contexts determine whether geometric or RGB images should be generated, and the geometry context is randomly dropped from the interleaved text-image-geometry training sequence so that the model can produce RGB-only outputs at inference time. The method has been tested on scene video generation with one-direction and forth-and-back trajectories, and the results show its superiority over previous approaches in maintaining scene consistency and camera control.
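The abstract describes an iterative loop: estimate geometry for the current view, fuse it into an explicit 3D scene, render the next view along the trajectory, and let the generative model restore the partially rendered image. The sketch below is a minimal, heavily simplified schematic of that control flow only; every function here is a hypothetical stub (constant depth, random rendering, NaN-filling "restoration"), not the authors' model or API.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32  # toy frame resolution

def estimate_geometry(frame):
    """Stub for step (1): predict per-pixel depth for the current view.
    In the paper this is the same autoregressive model, steered by a
    text context requesting geometry output."""
    return np.full((H, W), 2.0)  # placeholder constant depth

def update_scene(scene_points, depth, pose):
    """Accumulate the current view's back-projected points into the
    explicit 3D scene (here: just a growing point list)."""
    n = depth.size
    pts = np.stack([np.zeros(n), np.zeros(n), depth.ravel()], axis=1) + pose
    return np.concatenate([scene_points, pts], axis=0)

def render_novel_view(scene_points, pose):
    """Stub for rendering the partial scene from the next camera pose;
    unobserved regions are marked as NaN holes."""
    img = rng.random((H, W, 3))
    img[: H // 4] = np.nan  # pretend the top of the frame is unobserved
    return img

def restore_view(partial):
    """Stub for step (2): the generative model would inpaint the
    rendered partial image into a complete RGB frame."""
    return np.nan_to_num(partial, nan=0.5)

def generate_scene_video(first_frame, trajectory):
    """Iterate geometry estimation and novel-view restoration along a
    camera trajectory, mirroring the geometry-as-context loop."""
    frames = [first_frame]
    scene = np.empty((0, 3))
    for pose in trajectory:
        depth = estimate_geometry(frames[-1])     # (1) geometry context
        scene = update_scene(scene, depth, pose)  # fuse into 3D scene
        partial = render_novel_view(scene, pose)  # simulate the render
        frames.append(restore_view(partial))      # (2) restore full RGB
    return frames

trajectory = [np.array([0.0, 0.0, 0.1 * t]) for t in range(1, 4)]
video = generate_scene_video(rng.random((H, W, 3)), trajectory)
```

Because every stage runs inside one loop over one model, errors need not accumulate across separate reconstruction and inpainting systems, which is the limitation of prior pipelines that the paper targets.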

Metadata

arXiv ID: 2602.21929
Provider: ARXIV
Primary Category: cs.CV
Comment: Accepted by CVPR 2026
Published: 2026-02-25
Fetched: 2026-02-26 05:00
