ForceVLA2: Unleashing Hybrid Force-Position Control with Force Awareness for Contact-Rich Manipulation

Authors

Yang Li, Zhaxizhuoma, Hongru Jiang, Junjie Xia, Hongquan Zhang, Jinda Du, Yunsong Zhou, Jia Zeng, Ce Hao, Jieji Ren, Qiaojun Yu, Cewu Lu, Yu Qiao, Jiangmiao Pang

Abstract

Embodied intelligence for contact-rich manipulation has predominantly relied on position control, while explicit awareness and regulation of interaction forces remain under-explored, limiting stability, precision, and robustness in real-world tasks. We propose ForceVLA2, an end-to-end vision-language-action framework that equips robots with hybrid force-position control and explicit force awareness. ForceVLA2 introduces force-based prompts into the VLM expert to construct force-aware task concepts across stages, and employs a Cross-Scale Mixture-of-Experts (MoE) in the action expert to adaptively fuse these concepts with real-time interaction forces for closed-loop hybrid force-position regulation. To support learning and evaluation, we construct ForceVLA2-Dataset, containing 1,000 trajectories over 5 contact-rich tasks, including wiping, pressing, and assembling, with multi-view images, task prompts, proprioceptive state, and force signals. Extensive experiments show that ForceVLA2 substantially improves success rates and reliability in contact-rich manipulation, outperforming pi0 and pi0.5 by 48.0% and 35.0%, respectively, across the 5 tasks, and mitigating common failure modes such as arm overload and unstable contact, thereby actively advancing force-aware interactive physical intelligence in VLAs. The project page is available at https://sites.google.com/view/force-vla2/home.
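The abstract describes two architectural pieces: force-based prompts that give the VLM expert force-aware task concepts, and a Cross-Scale Mixture-of-Experts in the action expert that adaptively fuses those concepts with real-time interaction forces. The sketch below is a minimal, hypothetical PyTorch illustration of that fusion idea only; the module name, dimensions, pooling scales, and gating scheme are assumptions made for illustration and are not taken from the paper.

# Hypothetical sketch of a cross-scale MoE fusion block: each expert sees the
# force-aware task concept plus a force-torque window pooled at a different
# temporal scale; a learned gate mixes the experts per step. All names,
# dimensions, and the routing scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleMoEFusion(nn.Module):
    def __init__(self, concept_dim=512, force_dim=6, hidden_dim=256, num_experts=4):
        super().__init__()
        # Pooling window per expert (1, 2, 4, ... recent force samples); the
        # force window passed to forward() must cover the largest scale.
        self.scales = [2 ** i for i in range(num_experts)]
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(concept_dim + force_dim, hidden_dim),
                nn.GELU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            for _ in self.scales
        )
        # Gating network: weights the experts from the concept and the latest wrench.
        self.gate = nn.Linear(concept_dim + force_dim, num_experts)

    def forward(self, concept, force_window):
        # concept:      (B, concept_dim)  force-aware task concept from the VLM expert
        # force_window: (B, T, force_dim) recent 6-axis force-torque readings
        feats = []
        for scale, expert in zip(self.scales, self.experts):
            # Average-pool the most recent `scale` force samples for this expert.
            pooled = force_window[:, -scale:, :].mean(dim=1)
            feats.append(expert(torch.cat([concept, pooled], dim=-1)))
        latest = force_window[:, -1, :]
        weights = F.softmax(self.gate(torch.cat([concept, latest], dim=-1)), dim=-1)
        stacked = torch.stack(feats, dim=1)                 # (B, num_experts, hidden)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)
        return fused                                        # fed to the action expert

# Toy usage with random tensors, just to show the shapes involved.
if __name__ == "__main__":
    fusion = CrossScaleMoEFusion()
    concept = torch.randn(2, 512)          # batch of 2 task-concept embeddings
    forces = torch.randn(2, 8, 6)          # 8-step force-torque history
    print(fusion(concept, forces).shape)   # torch.Size([2, 256])

Gating on the latest wrench plus the task concept is one simple way to realize "adaptive fusion" of force-aware concepts with real-time forces; the paper's actual expert design and routing may differ.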

Metadata

arXiv ID: 2603.15169
Provider: ARXIV
Primary Category: cs.RO
Published: 2026-03-16
Comment: Accepted by CVPR 2026
Fetched: 2026-03-17 06:02
