
VAUQ: Vision-Aware Uncertainty Quantification for LVLM Self-Evaluation

Authors

Seongheon Park, Changdae Oh, Hyeong Kyu Choi, Xuefeng Du, Sharon Li

Abstract

Large Vision-Language Models (LVLMs) frequently hallucinate, limiting their safe deployment in real-world applications. Existing LLM self-evaluation methods rely on a model's ability to estimate the correctness of its own outputs, which can improve deployment reliability; however, they depend heavily on language priors and are therefore ill-suited for evaluating vision-conditioned predictions. We propose VAUQ, a vision-aware uncertainty quantification framework for LVLM self-evaluation that explicitly measures how strongly a model's output depends on visual evidence. VAUQ introduces the Image-Information Score (IS), which captures the reduction in predictive uncertainty attributable to visual input, and an unsupervised core-region masking strategy that amplifies the influence of salient regions. Combining predictive entropy with this core-masked IS yields a training-free scoring function that reliably reflects answer correctness. Comprehensive experiments show that VAUQ consistently outperforms existing self-evaluation methods across multiple datasets.
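The abstract describes the scoring function only in words; below is a minimal, hypothetical Python sketch of one plausible reading, in which the Image-Information Score (IS) is the drop in answer-token entropy when the visual input is present versus masked, and the final score trades low predictive entropy against a high core-masked IS. All names here (predictive_entropy, image_information_score, vauq_style_score, weight), the sign convention, and the linear combination are illustrative assumptions, not the authors' implementation.

import numpy as np

def predictive_entropy(token_probs: np.ndarray) -> float:
    # token_probs: shape (num_answer_tokens, vocab_size); each row is a
    # next-token distribution for one generated answer token.
    p = np.clip(token_probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def image_information_score(probs_with_image: np.ndarray,
                            probs_image_masked: np.ndarray) -> float:
    # Reduction in predictive uncertainty attributable to the visual input:
    # entropy under the masked image minus entropy under the full image.
    return (predictive_entropy(probs_image_masked)
            - predictive_entropy(probs_with_image))

def vauq_style_score(probs_with_image: np.ndarray,
                     probs_core_masked: np.ndarray,
                     weight: float = 1.0) -> float:
    # Combine low predictive entropy with a large core-masked IS; a higher
    # value is read as a proxy for a more reliable, vision-grounded answer.
    entropy = predictive_entropy(probs_with_image)
    info = image_information_score(probs_with_image, probs_core_masked)
    return -entropy + weight * info

# Toy usage with random distributions standing in for real LVLM outputs.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))
with_img = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
masked = np.full((5, 100), 1.0 / 100)  # flatter distribution once evidence is removed
print(vauq_style_score(with_img, masked))

The core-masked distributions would come from re-running the model on an image whose salient (core) region has been masked by the unsupervised strategy mentioned in the abstract; how that region is selected is not specified here, so this sketch simply takes the two sets of token distributions as given.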

Metadata

arXiv ID: 2602.21054
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-24
Fetched: 2026-02-25 06:05
