March 18, 2026

Towards Motion-aware Referring Image Segmentation

Authors

Chaeyun Kim, Seunghoon Yi, Yejin Kim, Yohan Jo, Joonseok Lee

Abstract

Referring Image Segmentation (RIS) requires identifying objects in images based on textual descriptions. We observe that existing methods significantly underperform on motion-related queries compared to appearance-based ones. To address this, we first introduce an efficient data augmentation scheme that extracts motion-centric phrases from the original captions, exposing models to more motion expressions without additional annotations. Second, since the same object can be described differently depending on context, we propose Multimodal Radial Contrastive Learning (MRaCL), performed on fused image-text embeddings rather than unimodal representations. For comprehensive evaluation, we introduce a new test split focusing on motion-centric queries, as well as a new benchmark, M-Bench, where objects are distinguished primarily by their actions. Extensive experiments show that our method substantially improves performance on motion-centric queries across multiple RIS models while maintaining competitive results on appearance-based descriptions. Code is available at https://github.com/snuviplab/MRaCL.
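
The abstract does not detail how motion-centric phrases are extracted from captions, so the sketch below is an assumption rather than the paper's method: a dependency-parse heuristic that pulls the verb phrase out of a referring expression, so each original caption can yield an extra motion-only training query without new annotations. `extract_motion_phrase` is a hypothetical helper name.

```python
# Hypothetical sketch of motion-centric phrase extraction (not the paper's
# actual rules, which the abstract does not specify). Uses spaCy's dependency
# parse; requires the "en_core_web_sm" model to be downloaded.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_motion_phrase(caption: str) -> str | None:
    """Return the span headed by the first verb in the caption, if any."""
    doc = nlp(caption)
    for token in doc:
        if token.pos_ == "VERB":
            # Take the verb together with its dependents, e.g.
            # "man ... running across the street" -> "running across the street".
            subtree = list(token.subtree)
            return doc[subtree[0].i : subtree[-1].i + 1].text
    return None  # appearance-only caption: nothing to augment

# One original caption yields an additional motion-centric query.
print(extract_motion_phrase("the man in a red shirt running across the street"))
```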
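Likewise, the "radial" formulation of MRaCL is not specified in the abstract; what is stated is only that contrastive learning runs on fused image-text embeddings rather than unimodal ones. A minimal sketch under that assumption, using a standard supervised InfoNCE-style objective in which different expressions for the same object form a positive group (`fused_contrastive_loss` and `object_ids` are illustrative names):

```python
# Minimal sketch of contrastive learning on *fused* image-text embeddings.
# The grouping-by-object InfoNCE objective below is an assumption; the paper's
# "radial" formulation may differ in detail.
import torch
import torch.nn.functional as F

def fused_contrastive_loss(fused: torch.Tensor,
                           object_ids: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """Supervised-contrastive loss over fused image-text embeddings.

    fused:      (B, D) embeddings of image-text pairs after multimodal fusion.
    object_ids: (B,)   id of the referred object; expressions describing the
                       same object (appearance- or motion-based) share an id.
    """
    z = F.normalize(fused, dim=-1)
    sim = z @ z.t() / temperature                        # (B, B) pairwise similarity
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (object_ids.unsqueeze(0) == object_ids.unsqueeze(1)) & ~self_mask
    # Log-probability of each pair under a softmax over all non-self pairs.
    log_prob = F.log_softmax(sim.masked_fill(self_mask, float("-inf")), dim=1)
    # Pull same-object pairs together: average log-prob over each row's positives.
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss[pos_mask.any(1)].mean()                  # rows that have positives
```

In training, `object_ids` would come from the segmentation annotations, so an appearance-based query and its extracted motion-centric query for the same mask would form a positive pair, which is the clustering behavior the abstract motivates.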

Metadata

arXiv ID: 2603.17413
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-18
Fetched: 2026-03-19 06:01
Comments: Accepted at AISTATS 2026. * Equal contribution
