ParallelVLM: Lossless Video-LLM Acceleration with Visual Alignment Aware Parallel Speculative Decoding

Authors

Quan Kong, Yuhao Shen, Yicheng Ji, Huan Li, Cong Wang

Abstract

Although current Video-LLMs achieve impressive performance on video understanding tasks, their autoregressive decoding efficiency remains constrained by the massive number of video tokens. Visual token pruning can partially ease this bottleneck, yet existing approaches still suffer from information loss and yield only modest decoding acceleration. In this paper, we propose ParallelVLM, a training-free draft-then-verify speculative decoding framework that overcomes both the mutual-waiting and limited-speedup problems between draft and target models in long-video settings. ParallelVLM features two parallelized stages that maximize hardware utilization, and incorporates an Unbiased Verifier-Guided Pruning strategy that better aligns the draft and target models by eliminating the positional bias of attention-guided pruning. Extensive experiments demonstrate that ParallelVLM effectively expands the draft window by $1.6\sim1.8\times$ while maintaining high accepted lengths, and accelerates decoding across various video understanding benchmarks by 3.36$\times$ on LLaVA-OneVision-72B and 2.42$\times$ on Qwen2.5-VL-32B compared with vanilla autoregressive decoding.
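The abstract gives no implementation details, but the draft-then-verify loop it builds on is standard speculative decoding: a small draft model proposes a window of tokens, the large target model scores them all in one parallel forward pass, each token is accepted with probability min(1, p_target/p_draft), and on the first rejection a corrected token is resampled from the residual distribution, which keeps the output distribution identical to the target model's (hence "lossless"). Below is a minimal sketch of that generic acceptance test; the function name, signature, and tensor layout are illustrative assumptions, not ParallelVLM's actual code.

```python
import torch

def speculative_verify(draft_tokens, p_draft, p_target):
    """Generic draft-then-verify acceptance test (a sketch, not
    ParallelVLM's implementation).

    draft_tokens: LongTensor  [k]     tokens proposed by the draft model
    p_draft:      FloatTensor [k, V]  draft-model distribution at each step
    p_target:     FloatTensor [k, V]  target-model distribution at each
                                      step, from one parallel forward pass
    """
    accepted = []
    for i, tok in enumerate(draft_tokens.tolist()):
        q = p_draft[i, tok]   # draft probability of the proposed token
        p = p_target[i, tok]  # target probability of the same token
        if torch.rand(()) < torch.clamp(p / q, max=1.0):
            accepted.append(tok)  # accept with probability min(1, p/q)
        else:
            # Reject: resample from the renormalized residual max(0, p - q),
            # which makes the overall output match the target distribution
            residual = torch.clamp(p_target[i] - p_draft[i], min=0.0)
            corrected = torch.multinomial(residual / residual.sum(), 1)
            accepted.append(corrected.item())
            break  # drafts after the first rejection are discarded
    # (A full implementation also samples one bonus token from the target
    # model when all k drafts are accepted.)
    return accepted
```

Per the abstract, ParallelVLM's contributions sit around this loop rather than inside it: the two parallelized stages remove the mutual waiting between draft and target passes, and Unbiased Verifier-Guided Pruning debiases attention-guided selection of visual tokens so the draft model's pruned context stays aligned with the verifier, sustaining high accepted lengths as the draft window grows.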

Metadata

arXiv ID: 2603.19610
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-20
Fetched: 2026-03-23 16:54
