AI LLM March 16, 2026

Confusion-Aware In-Context-Learning for Vision-Language Models in Robotic Manipulation

Authors

Yayun He, Zuheng Kang, Botao Zhao, Zhouyin Wu, Junqing Peng, Jianzong Wang

Abstract

Vision-language models (VLMs) have significantly improved the generalization capabilities of robotic manipulation. However, VLM-based systems often suffer from a lack of robustness, leading to unpredictable errors, particularly in scenarios involving confusable objects. Our preliminary analysis reveals that these failures are mainly caused by the shortcut learning problem inherent in VLMs, which limits their ability to accurately distinguish between confusable features. To this end, we propose Confusion-Aware In-Context Learning (CAICL), a method that enhances VLM performance in confusable scenarios for robotic manipulation. The approach begins with confusion localization and analysis, identifying potential sources of confusion. This information is then used as a prompt for the VLM to focus on the features most likely to cause misidentification. Extensive experiments on VIMA-Bench show that CAICL effectively addresses the shortcut learning issue, achieving an 85.5% success rate and showing good stability across tasks with different degrees of generalization.
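To make the two-stage idea in the abstract concrete, here is a minimal, purely illustrative sketch of confusion-aware prompting: a toy "confusion localization" step flags object pairs that share a category, and those hints are folded into the instruction so the VLM attends to distinguishing features. All function names, the scene schema, and the hint wording are assumptions for illustration; the paper's actual CAICL method is not specified at this level of detail in the abstract.

```python
# Hypothetical sketch of confusion-aware in-context prompting.
# Not the authors' implementation; names and data layout are invented.

def locate_confusions(scene_objects):
    """Toy confusion localization: flag object pairs sharing a category."""
    pairs = []
    for i, a in enumerate(scene_objects):
        for b in scene_objects[i + 1:]:
            if a["category"] == b["category"]:
                pairs.append((a["name"], b["name"]))
    return pairs

def build_caicl_prompt(instruction, scene_objects):
    """Prepend confusion hints to the task instruction for the VLM."""
    hints = [
        f"Note: '{x}' and '{y}' look similar; verify color and texture before acting."
        for x, y in locate_confusions(scene_objects)
    ]
    return "\n".join(hints + [instruction])

scene = [
    {"name": "red block", "category": "block"},
    {"name": "maroon block", "category": "block"},
    {"name": "bowl", "category": "container"},
]
prompt = build_caicl_prompt("Put the red block into the bowl.", scene)
print(prompt)
```

The resulting prompt carries the confusable-pair warning ahead of the instruction, which is the in-context mechanism the abstract describes: the model is told where misidentification is likely before it acts.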

Metadata

arXiv ID: 2603.15134
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-03-16
Fetched: 2026-03-17 06:02


Comment: Accepted by the 29th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2026)