GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery

Authors

Lifan Jiang, Yuhang Pei, oxi Wu, Yan Zhao, Tianrun Wu, Shulong Yu, Lihui Zhang, Deng Cai

Abstract

Recent advances in MLLMs are reframing segmentation from fixed-category prediction to instruction-grounded localization. While reasoning-based segmentation has progressed rapidly in natural scenes, remote sensing lacks a generalizable solution due to the prohibitive cost of reasoning-oriented data and domain-specific challenges such as overhead viewpoints. We present GeoSeg, a zero-shot, training-free framework that bypasses the supervision bottleneck for reasoning-driven remote sensing segmentation. GeoSeg couples MLLM reasoning with precise localization via: (i) bias-aware coordinate refinement to correct systematic grounding shifts and (ii) a dual-route prompting mechanism to fuse semantic intent with fine-grained spatial cues. We also introduce GeoSeg-Bench, a diagnostic benchmark of 810 image–query pairs with hierarchical difficulty levels. Experiments show that GeoSeg consistently outperforms all baselines, with extensive ablations confirming the effectiveness and necessity of each component.

Metadata

arXiv ID: 2603.03983
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-04
Fetched: 2026-03-05 06:06
