AI LLM March 17, 2026

Grounding the Score: Explicit Visual Premise Verification for Reliable Vision-Language Process Reward Models

Authors

Junxin Wang, Dai Guan, Weijie Qiu, Zhihang Li, Yongbo Gai, Zhengyi Yang, Mengyu Zhou, Erchao Zhao, Xiaoxi Jiang, Guanjun Jiang

Abstract

Vision-language process reward models (VL-PRMs) are increasingly used to score intermediate reasoning steps and rerank candidates under test-time scaling. However, they often function as black-box judges: a low step score may reflect a genuine reasoning mistake or simply the verifier's misperception of the image. This entanglement between perception and reasoning leads to systematic false positives (rewarding hallucinated visual premises) and false negatives (penalizing correct grounded statements), undermining both reranking and error localization. We introduce Explicit Visual Premise Verification (EVPV), a lightweight verification interface that conditions step scoring on the reliability of the visual premises a step depends on. The policy is prompted to produce a step-wise visual checklist that makes required visual facts explicit, while a constraint extractor independently derives structured visual constraints from the input image. EVPV matches checklist claims against these constraints to compute a scalar visual reliability signal, and calibrates PRM step rewards via reliability gating: rewards for visually dependent steps are attenuated when reliability is low and preserved when reliability is high. This decouples perceptual uncertainty from logical evaluation without per-step tool calls. Experiments on VisualProcessBench and six multimodal reasoning benchmarks show that EVPV improves step-level verification and consistently boosts Best-of-N reranking accuracy over strong baselines. Furthermore, injecting controlled corruption into the extracted constraints produces monotonic performance degradation, providing causal evidence that the gains arise from constraint fidelity and explicit premise verification rather than incidental prompt effects. Code is available at: https://github.com/Qwen-Applications/EVPV-PRM
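The reliability-gating idea described in the abstract can be illustrated with a minimal sketch. All names below (`match_reliability`, `gate_step_reward`, the threshold `tau`) are hypothetical and chosen for illustration; the paper's actual matching procedure and gating calibration may differ.

```python
def match_reliability(checklist, constraints):
    """Fraction of a step's visual-premise claims that are supported by
    the structured constraints extracted from the image.
    A step with no visual premises is treated as fully reliable."""
    if not checklist:
        return 1.0
    supported = sum(1 for claim in checklist if claim in constraints)
    return supported / len(checklist)


def gate_step_reward(raw_reward, reliability, visually_dependent, tau=0.5):
    """Calibrate a PRM step reward with the scalar reliability signal:
    preserve the reward when reliability is high (or the step has no
    visual dependence), attenuate it proportionally when reliability
    falls below the threshold tau."""
    if not visually_dependent or reliability >= tau:
        return raw_reward
    return raw_reward * reliability
```

Under this sketch, a visually dependent step whose checklist claims are mostly unsupported by the extracted constraints receives a down-weighted reward, so a low score reflects perceptual unreliability rather than being conflated with a logical error.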

Metadata

arXiv ID: 2603.16253
Provider: ARXIV
Primary Category: cs.CV
Secondary Category: cs.AI
Published: 2026-03-17
Fetched: 2026-03-18 06:02
Comment: 27 pages, 4 figures, 10 tables. Evaluated on VisualProcessBench and six multimodal reasoning benchmarks (LogicVista, MMMU, MathVerse-VO, MathVision, MathVista, WeMath). Includes ablations and causal analysis via controlled constraint corruption.
