AI · LLM · March 25, 2026

Language-Assisted Image Clustering Guided by Discriminative Relational Signals and Adaptive Semantic Centers

Authors

Jun Ma, Xu Zhang, Zhengxing Jiao, Yaxin Hou, Hui Liu, Junhui Hou, Yuheng Jia

Abstract

Language-Assisted Image Clustering (LAIC) augments input images with additional texts generated with the help of vision-language models (VLMs) to improve clustering performance. Despite recent progress, existing LAIC methods often overlook two issues: (i) the textual features constructed for each image are highly similar, leading to weak inter-class discriminability; and (ii) the clustering step is restricted to pre-built image-text alignments, limiting better utilization of the text modality. To address these issues, we propose a new LAIC framework with two complementary components. First, we exploit cross-modal relations to produce more discriminative self-supervision signals for clustering, an approach compatible with the training mechanisms of most VLMs. Second, we learn category-wise continuous semantic centers via prompt learning to produce the final clustering assignments. Extensive experiments on eight benchmark datasets demonstrate that our method achieves an average improvement of 2.6% over state-of-the-art methods, and that the learned semantic centers exhibit strong interpretability. Code is available in the supplementary material.
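As a rough illustration of the two components described in the abstract, below is a minimal PyTorch sketch assuming CLIP-style image and text features. Everything in it is an assumption for illustration, not the authors' actual method: the function and class names (`relational_targets`, `SemanticCenters`) are invented, the soft relational targets and the KL-based training signal are one plausible instantiation, and the semantic centers are learned directly in the shared embedding space rather than as prompt tokens passed through a text encoder, which is a simplification of the prompt-learning idea.

```python
# Illustrative sketch only; names and losses are assumptions, not the paper's API.
import torch
import torch.nn.functional as F

def relational_targets(img_feats, txt_feats, temperature=0.1):
    """Cross-modal relational signal: instead of treating each image's own
    caption as the sole positive, use the full image-text similarity
    distribution over the batch as a soft self-supervision target."""
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    sim = img @ txt.t() / temperature            # (N, N) cross-modal relations
    return sim.softmax(dim=-1)                   # soft targets over the batch

class SemanticCenters(torch.nn.Module):
    """K continuous 'semantic center' embeddings learned like soft prompts;
    clustering assignments come from cosine similarity to these centers."""
    def __init__(self, num_clusters, dim):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_clusters, dim) * 0.02)

    def forward(self, feats, temperature=0.1):
        f = F.normalize(feats, dim=-1)
        c = F.normalize(self.centers, dim=-1)
        return (f @ c.t() / temperature).softmax(dim=-1)   # (N, K) assignments

# Toy usage with random stand-ins for frozen-VLM features.
N, D, K = 32, 512, 10
img_feats = torch.randn(N, D)
txt_feats = torch.randn(N, D)

targets = relational_targets(img_feats, txt_feats)         # (N, N)
model = SemanticCenters(K, D)
img_assign = model(img_feats)                              # (N, K)
txt_assign = model(txt_feats)                              # (N, K)

# One possible training signal: each image's cluster distribution should
# match the relation-weighted average of the text-side distributions.
soft_cluster_targets = targets @ txt_assign                # rows still sum to 1
loss = F.kl_div(img_assign.log(), soft_cluster_targets, reduction="batchmean")
print(loss.item())
```

In a real pipeline the features would come from a frozen VLM encoder pair (e.g., CLIP), and the centers could instead be realized as learnable prompt vectors fed through the text encoder, so that each center stays anchored to the language space and remains interpretable.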

Metadata

arXiv ID: 2603.24275
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-25
Fetched: 2026-03-26 06:02
