
Vinedresser3D: Agentic Text-guided 3D Editing

Authors

Yankuan Chi, Xiang Li, Zixuan Huang, James M. Rehg

Abstract

Text-guided 3D editing aims to modify existing 3D assets using natural-language instructions. Current methods struggle to jointly understand complex prompts, automatically localize edits in 3D, and preserve unedited content. We introduce Vinedresser3D, an agentic framework for high-quality text-guided 3D editing that operates directly in the latent space of a native 3D generative model. Given a 3D asset and an editing prompt, Vinedresser3D uses a multimodal large language model to infer rich descriptions of the original asset, identify the edit region and edit type (addition, modification, deletion), and generate decomposed structural and appearance-level text guidance. The agent then selects an informative view and applies an image editing model to obtain visual guidance. Finally, an inversion-based rectified-flow inpainting pipeline with an interleaved sampling module performs editing in the 3D latent space, enforcing prompt alignment while maintaining 3D coherence and unedited regions. Experiments on diverse 3D edits demonstrate that Vinedresser3D outperforms prior baselines in both automatic metrics and human preference studies, while enabling precise, coherent, and mask-free 3D editing.
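The abstract describes two stages: an MLLM agent that turns the editing prompt into a decomposed plan (edit type plus structural and appearance guidance), and a masked rectified-flow inpainting loop that edits the 3D latent while preserving unedited regions. The paper's actual implementation is not included in this listing, so the sketch below is a generic, heavily simplified NumPy illustration: `EditPlan`, `rectified_flow_inpaint`, and the toy `velocity_fn` are all hypothetical names, and the per-step re-anchoring only mimics the idea of the interleaved sampling module, not the paper's exact procedure.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class EditPlan:
    """Hypothetical container for the agent's decomposed guidance."""
    edit_type: str           # "addition" | "modification" | "deletion"
    structure_prompt: str    # structural text guidance from the MLLM agent
    appearance_prompt: str   # appearance-level text guidance

def rectified_flow_inpaint(z_orig, mask, velocity_fn, steps=10, seed=0):
    """Masked flow-inpainting sketch (Euler integration, time t: 1 -> 0).

    mask == 1 marks the edit region. Elsewhere, the latent is re-anchored
    at every step to a noised copy of the original latent, so unedited
    content is preserved by construction.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_orig.shape)          # pure noise at t = 1
    dt = 1.0 / steps
    for i in range(steps):
        t = 1.0 - i * dt
        z = z - dt * velocity_fn(z, t)             # Euler step on the flow ODE
        t_next = t - dt                            # time after this step
        noised_orig = (t_next * rng.standard_normal(z_orig.shape)
                       + (1.0 - t_next) * z_orig)  # original latent at t_next
        z = mask * z + (1.0 - mask) * noised_orig  # interleaved re-anchoring
    return z
```

Because the final re-anchoring happens at `t_next == 0`, the unedited (`mask == 0`) region of the output exactly equals the original latent, regardless of the velocity field, which is the preservation property the abstract emphasizes.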

Metadata

arXiv ID: 2602.19542
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-23
Fetched: 2026-02-24 04:38

Comment: CVPR 2026
Project website: https://vinedresser3d.github.io/