VP-VLA: Visual Prompting as an Interface for Vision-Language-Action Models

Authors

Zixuan Wang, Yuxin Chen, Yuqi Liu, Jinhui Ye, Pengguang Chen, Changsheng Lu, Shu Liu, Jiaya Jia

Abstract

Vision-Language-Action (VLA) models typically map visual observations and linguistic instructions directly to robotic control signals. This "black-box" mapping forces a single forward pass to simultaneously handle instruction interpretation, spatial grounding, and low-level control, often leading to poor spatial precision and limited robustness in out-of-distribution scenarios. To address these limitations, we propose VP-VLA, a dual-system framework that decouples high-level reasoning from low-level execution via a structured visual prompting interface. Specifically, a "System 2 Planner" decomposes complex instructions into sub-tasks and identifies the relevant target objects and goal locations. These spatial anchors are then overlaid directly onto visual observations as structured visual prompts, such as crosshairs and bounding boxes. Guided by these prompts and enhanced by a novel auxiliary visual grounding objective during training, a "System 1 Controller" reliably generates precise low-level motions. Experiments on the Robocasa-GR1-Tabletop benchmark and SimplerEnv simulation demonstrate that VP-VLA improves success rates by 5% and 8.3%, respectively, surpassing competitive baselines including QwenOFT and GR00T-N1.6.
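
The visual prompting interface described above lends itself to a concrete sketch. The snippet below shows one plausible way the System 2 Planner's spatial anchors (target bounding boxes and goal points) could be rasterized onto an observation frame before the System 1 Controller consumes it. The plan schema, coordinate values, and the system1_controller call are illustrative assumptions, not the paper's published implementation.

import numpy as np
import cv2

def render_visual_prompts(observation, plan):
    # Overlay structured visual prompts from a System 2 plan onto the
    # raw observation: bounding boxes mark target objects, crosshairs
    # mark goal locations. Drawing happens on a copy so the original
    # frame stays untouched.
    img = observation.copy()
    for target in plan.get("targets", []):
        x1, y1, x2, y2 = target["bbox"]
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    for goal in plan.get("goals", []):
        cx, cy = goal["point"]
        # Crosshair: two short perpendicular segments centered on the goal.
        cv2.line(img, (cx - 10, cy), (cx + 10, cy), (0, 0, 255), 2)
        cv2.line(img, (cx, cy - 10), (cx, cy + 10), (0, 0, 255), 2)
    return img

# Toy usage: a blank frame standing in for a camera image, and a plan
# with one target box and one goal point (coordinates are made up).
plan = {"targets": [{"bbox": (80, 60, 150, 130)}],
        "goals": [{"point": (200, 100)}]}
obs = np.zeros((224, 224, 3), dtype=np.uint8)
prompted_obs = render_visual_prompts(obs, plan)
# action = system1_controller(prompted_obs, subtask_text)  # hypothetical call

At training time, the abstract pairs this interface with an auxiliary visual grounding objective alongside action prediction; a weighted sum such as L_total = L_action + λ · L_ground would be a typical formulation, though the paper's exact loss is not given in the abstract.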

Metadata

arXiv ID: 2603.22003
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-03-23
Fetched: 2026-03-24 06:02

Links

arXiv abstract: https://arxiv.org/abs/2603.22003v1
PDF: https://arxiv.org/pdf/2603.22003v1
Project page: https://visualprompt-vla.github.io/