March 12, 2026

Linking Perception, Confidence and Accuracy in MLLMs

Authors

Yuetian Du, Yucheng Wang, Rongyu Zhang, Zhijie Xu, Boyu Yang, Ming Kong, Jie Liu, Qiang Zhu

Abstract

Recent advances in Multi-modal Large Language Models (MLLMs) have predominantly focused on enhancing visual perception to improve accuracy. However, a critical question remains unexplored: Do models know when they do not know? Through a probing experiment, we reveal a severe confidence miscalibration problem in MLLMs. To address this, we propose Confidence-Driven Reinforcement Learning (CDRL), which uses original-noise image pairs and a novel confidence-based reward to enhance perceptual sensitivity and robustly calibrate the model's confidence. Beyond training benefits, calibrated confidence enables more effective test-time scaling as a free lunch. We further propose Confidence-Aware Test-Time Scaling (CA-TTS), which dynamically coordinates Self-Consistency, Self-Reflection, and Visual Self-Check modules guided by confidence signals. An Expert Model acts in multiple roles (e.g., Planner, Critic, Voter) to schedule these modules and provide external verification. Our integrated framework establishes new state-of-the-art results with consistent 8.8% gains across four benchmarks. Further ablation studies demonstrate the effectiveness of each module and the superiority of our scaling strategy.
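The abstract does not give formulas or pseudocode, so the following is only an illustrative sketch of the two ideas it names: measuring confidence miscalibration, for which Expected Calibration Error (ECE) is the standard metric, and routing extra test-time compute by confidence, here shown as a hypothetical gate that falls back to self-consistency (majority voting over samples) only when the model's stated confidence is low. The `model` interface, the `threshold`, and the sample count are all assumptions, not details from the paper.

```python
from collections import Counter

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by stated confidence, then take the
    sample-weighted gap between each bin's mean confidence and its accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - acc)
    return ece

def confidence_gated_answer(model, question, threshold=0.8, n_samples=5):
    """Hypothetical confidence gate: `model(question)` returns (answer, confidence).
    A confident first answer is returned directly; otherwise we spend extra
    compute on self-consistency, i.e. a majority vote over resampled answers."""
    answer, conf = model(question)
    if conf >= threshold:
        return answer
    votes = Counter(model(question)[0] for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```

A well-calibrated model drives the ECE toward zero (confidence matches accuracy in every bin), which is also what makes the gate useful: under miscalibration, confidence is a poor routing signal and extra samples get spent on the wrong questions.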

Metadata

arXiv ID: 2603.12149
Provider: ARXIV
Primary Category: cs.CV
Categories: cs.CV, cs.CL
Comment: Accepted by CVPR2026
Published: 2026-03-12
Fetched: 2026-03-13 06:02
