
3DCity-LLM: Empowering Multi-modality Large Language Models for 3D City-scale Perception and Understanding

Authors

Yiping Chen, Jinpeng Li, Wenyu Ke, Yang Luo, Jie Ouyang, Zhongjie He, Li Liu, Hongchao Fan, Hao Wu

Abstract

While multi-modality large language models excel in object-centric or indoor scenarios, scaling them to 3D city-scale environments remains a formidable challenge. To bridge this gap, we propose 3DCity-LLM, a unified framework designed for 3D city-scale vision-language perception and understanding. 3DCity-LLM employs a coarse-to-fine feature encoding strategy comprising three parallel branches for the target object, inter-object relationships, and the global scene. To facilitate large-scale training, we introduce the 3DCity-LLM-1.2M dataset, which comprises approximately 1.2 million high-quality samples across seven representative task categories, ranging from fine-grained object analysis to multi-faceted scene planning. This strictly quality-controlled dataset integrates explicit 3D numerical information and diverse user-oriented simulations, enriching the diversity and realism of question answering in urban scenarios. Furthermore, we apply a multi-dimensional evaluation protocol based on text-similarity metrics and LLM-based semantic assessment to ensure faithful and comprehensive evaluation of all methods. Extensive experiments on two benchmarks demonstrate that 3DCity-LLM significantly outperforms existing state-of-the-art methods, offering a promising direction for advancing spatial reasoning and urban intelligence. The source code and dataset are available at https://github.com/SYSU-3DSTAILab/3D-City-LLM.
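
The abstract describes a coarse-to-fine encoder built from three parallel branches. A minimal PyTorch sketch of how such a design could be wired is below; the module names, dimensions, pooling choices, and the concatenation-based fusion are all assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of a three-branch, coarse-to-fine city encoder.
# Everything here (branch design, dims, fusion) is an assumption; the
# paper's actual architecture may differ substantially.
import torch
import torch.nn as nn

class ThreeBranchCityEncoder(nn.Module):
    def __init__(self, point_dim: int = 6, d_model: int = 1024):
        super().__init__()
        # Fine branch: features of the target object (e.g., its point cloud).
        self.object_branch = nn.Sequential(
            nn.Linear(point_dim, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Mid branch: pairwise relations between the target and nearby objects.
        self.relation_branch = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Coarse branch: a global summary of the downsampled city scene.
        self.scene_branch = nn.Sequential(
            nn.Linear(point_dim, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Projector mapping fused 3D features into the LLM embedding space.
        self.projector = nn.Linear(3 * d_model, d_model)

    def forward(self, target_pts, neighbor_feats, scene_pts):
        # target_pts:     (B, N_t, point_dim) points of the target object
        # neighbor_feats: (B, K, d_model)     precomputed neighbor embeddings
        # scene_pts:      (B, N_s, point_dim) downsampled global scene points
        obj = self.object_branch(target_pts).max(dim=1).values        # (B, d)
        pair = torch.cat(
            [obj.unsqueeze(1).expand_as(neighbor_feats), neighbor_feats], dim=-1
        )                                                             # (B, K, 2d)
        rel = self.relation_branch(pair).mean(dim=1)                  # (B, d)
        scn = self.scene_branch(scene_pts).max(dim=1).values          # (B, d)
        fused = torch.cat([obj, rel, scn], dim=-1)                    # (B, 3d)
        return self.projector(fused)  # embedding handed to the LLM
```

In practice the projector would likely emit a sequence of visual tokens rather than a single vector; the single-vector form here is purely for brevity.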

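The evaluation protocol pairs text-similarity metrics with LLM-based semantic assessment. A hedged sketch of such a two-part scorer follows, assuming BLEU and ROUGE-L as the similarity metrics and a generic `judge` callable (a hypothetical interface, not from the paper) for the semantic check.

```python
# Hypothetical sketch of the multi-dimensional evaluation protocol:
# surface-level text similarity plus an LLM acting as a semantic judge.
# Metric choice, prompt, and scoring scale are assumptions.
import sacrebleu
from rouge_score import rouge_scorer

_rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def text_similarity(prediction: str, reference: str) -> dict:
    """Surface-level similarity between a model answer and ground truth."""
    bleu = sacrebleu.sentence_bleu(prediction, [reference]).score       # 0-100
    rouge_l = _rouge.score(reference, prediction)["rougeL"].fmeasure    # 0-1
    return {"bleu": bleu, "rougeL": rouge_l}

def semantic_score(prediction: str, reference: str, judge) -> float:
    """LLM-based assessment; `judge` is any callable prompt -> text."""
    prompt = (
        "Rate from 0 to 10 how well the candidate answer matches the "
        f"reference in meaning.\nReference: {reference}\n"
        f"Candidate: {prediction}\nReply with a single number."
    )
    try:
        return float(judge(prompt).strip()) / 10.0
    except ValueError:
        return 0.0  # unparseable judge output counts as a miss
```
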
Metadata

arXiv ID: 2603.23447
Provider: ARXIV
Primary Category: cs.CV
Secondary Category: cs.AI
Comment: 24 pages, 11 figures, 12 tables
Published: 2026-03-24
Fetched: 2026-03-25 06:02

