
Understanding the Fine-Grained Knowledge Capabilities of Vision-Language Models

Authors

Dhruba Ghosh, Yuhui Zhang, Ludwig Schmidt

Abstract

Vision-language models (VLMs) have made substantial progress across a wide range of visual question answering benchmarks, spanning visual reasoning, document understanding, and multimodal dialogue. These improvements hold across VLMs built on a variety of base models, alignment architectures, and training data. However, recent work shows that these models trail behind on traditional image classification benchmarks, which test fine-grained visual knowledge. We test a large number of recent VLMs on fine-grained classification benchmarks and identify potential factors behind the disconnect between fine-grained knowledge and other vision benchmarks. Through a series of ablation experiments, we find that using a better LLM improves all benchmark scores equally, while a better vision encoder disproportionately improves fine-grained classification performance. Furthermore, we find that the pretraining stage is also vital to fine-grained performance, particularly when the language model weights are unfrozen during pretraining. These insights pave the way for enhancing fine-grained visual understanding and vision-centric capabilities in VLMs.
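
For a concrete picture of the evaluation setting the abstract describes, the sketch below shows one way to pose fine-grained classification to a VLM as a question-answering prompt and score its free-form answers against the label set. This is an illustrative sketch only, not the paper's protocol: query_vlm is a placeholder for whatever inference call a given VLM exposes, and the class names and (image, label) samples are assumed inputs.

    def classify_with_vlm(query_vlm, image_path, class_names):
        """Pose fine-grained classification as an open-ended VQA prompt.

        query_vlm(image_path, prompt) -> str is a placeholder for any
        VLM inference call (e.g., a LLaVA- or BLIP-style generate step).
        """
        prompt = (
            "What is the specific species or model shown in this image? "
            "Answer with one of: " + ", ".join(class_names) + "."
        )
        answer = query_vlm(image_path, prompt).lower()
        # Map the free-form answer back to a label by substring match;
        # return None if the model answers outside the label set.
        for name in class_names:
            if name.lower() in answer:
                return name
        return None

    def fine_grained_accuracy(query_vlm, samples, class_names):
        """samples: iterable of (image_path, true_label) pairs."""
        correct, total = 0, 0
        for image_path, true_label in samples:
            pred = classify_with_vlm(query_vlm, image_path, class_names)
            correct += int(pred == true_label)
            total += 1
        return correct / max(total, 1)

A harness like this makes the disconnect the paper studies easy to reproduce in spirit: the same query_vlm callable can be scored on a fine-grained label set (e.g., bird species or aircraft variants) and on a standard VQA benchmark, and the two scores compared.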

Metadata

arXiv ID: 2602.17871
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-19
Fetched: 2026-02-23 05:33
