VOLMO: Versatile and Open Large Models for Ophthalmology

Authors

Zhenyue Qin, Younjoon Chung, Elijah Lee, Wanyue Feng, Xuguang Ai, Serina Applebaum, Minjie Zou, Yang Liu, Pan Xiao, Mac Singer, Amisha Dave, Aidan Gilson, Tiarnan D. L. Keenan, Emily Y. Chew, Zhiyong Lu, Yih-Chung Tham, Ron Adelman, Luciano V. Del Priore, Qingyu Chen

Abstract

Vision impairment affects millions globally, and early detection is critical to preventing irreversible vision loss. Ophthalmology workflows require clinicians to integrate medical images, structured clinical data, and free-text notes to determine disease severity and management, which is time-consuming and burdensome. Recent multimodal large language models (MLLMs) show promise, but existing general and medical MLLMs perform poorly in ophthalmology, and few ophthalmology-specific MLLMs are openly available. We present VOLMO (Versatile and Open Large Models for Ophthalmology), a model-agnostic, data-open framework for developing ophthalmology-specific MLLMs. VOLMO includes three stages: ophthalmology knowledge pretraining on 86,965 image-text pairs from 26,569 articles across 82 journals; domain task fine-tuning on 26,929 annotated instances spanning 12 eye conditions for disease screening and severity classification; and multi-step clinical reasoning on 913 patient case reports for assessment, planning, and follow-up care. Using this framework, we trained a compact 2B-parameter MLLM and compared it with strong baselines, including InternVL-2B, LLaVA-Med-7B, MedGemma-4B, MedGemma-27B, and RETFound. We evaluated these models on image description generation, disease screening and staging classification, and assessment-and-management generation, with additional manual review by two healthcare professionals and external validation on three independent cohorts for age-related macular degeneration and diabetic retinopathy. Across settings, VOLMO-2B consistently outperformed baselines, achieving stronger image description performance, an average F1 of 87.4% across 12 eye conditions, and higher scores in external validation.
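
To make the three-stage recipe concrete, below is a minimal Python sketch of the training pipeline as the abstract describes it. This is a hypothetical illustration, not the authors' released code: the Stage dataclass, the stage names, and the train() stub are assumptions; only the corpus descriptions, example counts, and objectives are taken from the abstract.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    data: str          # corpus described in the abstract
    num_examples: int  # counts reported in the abstract
    objective: str     # training objective, paraphrased from the abstract

# The three VOLMO stages, in the order the abstract lists them.
STAGES = [
    Stage(
        name="knowledge_pretraining",
        data="image-text pairs from 26,569 articles across 82 journals",
        num_examples=86_965,
        objective="align ophthalmic images with expert text",
    ),
    Stage(
        name="domain_task_finetuning",
        data="annotated instances spanning 12 eye conditions",
        num_examples=26_929,
        objective="disease screening and severity classification",
    ),
    Stage(
        name="clinical_reasoning",
        data="patient case reports",
        num_examples=913,
        objective="assessment, planning, and follow-up care",
    ),
]

def train(checkpoint: str, stage: Stage) -> str:
    """Placeholder for one fine-tuning pass; a real run would update model weights."""
    print(f"[{stage.name}] {stage.num_examples:,} examples -> {stage.objective}")
    return f"{checkpoint}+{stage.name}"

if __name__ == "__main__":
    checkpoint = "base-2B-mllm"  # hypothetical starting checkpoint (VOLMO-2B is ~2B parameters)
    for stage in STAGES:
        checkpoint = train(checkpoint, stage)

Running the sketch just chains the three stages over a named checkpoint; the point is the sequential curriculum (broad domain knowledge, then task labels, then multi-step reasoning), which is the framework's model-agnostic core.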

Metadata

arXiv ID: 2603.23953
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-25
Fetched: 2026-03-26 06:02
