
March 17, 2026

Kestrel: Grounding Self-Refinement for LVLM Hallucination Mitigation

Authors

Jiawei Mao, Hardy Chen, Haoqin Tu, Yuhan Wang, Letian Zhang, Zeyu Zheng, Huaxiu Yao, Zirui Wang, Cihang Xie, Yuyin Zhou

Abstract

Large vision-language models (LVLMs) have become increasingly capable but remain prone to hallucinations in multimodal tasks, which significantly limits their deployment. Because training LVLMs to avoid hallucinations becomes prohibitively expensive at larger scales, training-free methods offer a cheap and flexible alternative, yet existing approaches based on decoding or tool use often bring limited gains and/or weak interpretability. We propose Kestrel, a training-free framework for LVLM hallucination mitigation that combines an explicit visual-grounding agent with an evidence-verified self-refinement mechanism. Concretely, Kestrel first collects explicit visual evidence and converts tool outputs into reusable, structured textual evidence. Second, to take full advantage of this evidence, Kestrel verifies it with an LVLM judge, then iteratively self-refines answers against the verified evidence to reduce the risk of over-correction. Extensive experiments show that Kestrel improves performance over strong baselines across hallucination benchmarks (e.g., an average +3.31% on POPE and +28.34 on MME-Hallucination with Qwen3-VL) while providing transparent verification traces for hallucination diagnosis and analysis; e.g., the integrated self-refinement module and grounding agent together contribute an average +2.0% gain on POPE.
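The abstract's collect-verify-refine loop can be sketched as below. This is a minimal illustration of the described control flow only: every function, field name, and mock component here is a hypothetical stand-in, not Kestrel's actual API, tools, or prompts.

```python
# Hypothetical sketch of an evidence-verified self-refinement loop as
# described in the abstract. Tool, judge, and model calls are stubbed with
# mocks; all names are illustrative, not Kestrel's real interfaces.

def collect_evidence(image, question, tools):
    """Run grounding tools and convert their outputs into structured text facts."""
    evidence = []
    for tool in tools:
        for fact in tool(image, question):
            evidence.append({"source": tool.__name__, "fact": fact})
    return evidence

def verify_evidence(evidence, judge):
    """Keep only the facts the judge accepts, to reduce over-correction."""
    return [e for e in evidence if judge(e["fact"])]

def refine_answer(model, question, answer, evidence, max_rounds=3):
    """Iteratively revise the answer against verified evidence until stable."""
    for _ in range(max_rounds):
        revised = model(question, answer, evidence)
        if revised == answer:  # converged: no further change
            break
        answer = revised
    return answer

# --- Minimal demo with mock components ---
def detector(image, question):
    # A grounding tool would return detections; here they are hard-coded.
    return ["one dog detected", "no cat detected"]

mock_judge = lambda fact: True  # a real LVLM judge would accept/reject facts
mock_model = lambda q, a, ev: "There is one dog." if ev else a

evidence = collect_evidence("img.png", "What animals are present?", [detector])
verified = verify_evidence(evidence, mock_judge)
final = refine_answer(mock_model, "What animals are present?",
                      "There are two cats.", verified)
print(final)  # the hallucinated initial answer is revised to match the evidence
```

The loop terminates either when the model stops changing its answer or after `max_rounds` iterations, so the refinement cost is bounded regardless of judge behavior.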

Metadata

arXiv ID: 2603.16664
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-17
Fetched: 2026-03-18 06:02
