Learning to Read Where to Look: Disease-Aware Vision-Language Pretraining for 3D CT

Authors

Simon Ging, Philipp Arnold, Sebastian Walter, Hani Alnahas, Hannah Bast, Elmar Kotter, Jiancheng Yang, Behzad Bozorgtabar, Thomas Brox

Abstract

Recent 3D CT vision-language models align volumes with reports via contrastive pretraining, but typically rely on limited public data and provide only coarse global supervision. We train a 3D CT vision-language model on 98k report-volume pairs (50k patients) collected at a single hospital, combined with public datasets, using SigLIP-style contrastive pretraining together with prompt-based disease supervision in the shared vision-text embedding space. On CT-RATE, our model achieves state-of-the-art text-to-image retrieval (R@10 31.5 vs. 22.2) and competitive disease classification (AUC 83.8 vs. 83.8), with consistent results on Rad-ChestCT (AUC 77.0 vs. 77.3). We further observe that radiologists routinely reference specific images within their reports (e.g., "series X, image Y"), linking textual descriptions to precise axial locations. We automatically mine 262k such snippet-slice pairs and introduce the task of intra-scan snippet localization (predicting the axial depth referred to by a text snippet), reducing mean absolute error to 36.3 mm at 12 mm feature resolution, compared with 67.0 mm for the best baseline. Adding this localization objective leaves retrieval and classification broadly unchanged within confidence bounds, yielding a single unified model for retrieval, classification, and intra-scan grounding.
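
For context, the SigLIP-style objective mentioned in the abstract replaces the usual softmax contrastive loss with an independent sigmoid loss over all report-volume pairs in a batch. Below is a minimal PyTorch sketch of such a loss; the function name, tensor shapes, and the learnable scale and bias are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def siglip_style_loss(volume_emb, text_emb, logit_scale, logit_bias):
    # volume_emb, text_emb: (B, D) L2-normalized embeddings of CT volumes and reports
    logits = logit_scale * volume_emb @ text_emb.t() + logit_bias  # (B, B) pairwise logits
    # label +1 on the diagonal (matched report-volume pairs), -1 for all other pairings
    labels = 2.0 * torch.eye(logits.size(0), device=logits.device) - 1.0
    # independent sigmoid loss per pair, averaged over the batch
    return -F.logsigmoid(labels * logits).mean()

The intra-scan snippet localization task can likewise be pictured as scoring a text snippet against per-depth visual features and regressing an axial position. The sketch below uses hypothetical names and the 12 mm feature spacing quoted in the abstract; it computes an expected depth in mm and its mean absolute error against mined slice references, and is one possible formulation rather than the paper's method.

def predict_axial_depth_mm(slice_feats, snippet_emb, spacing_mm=12.0):
    # slice_feats: (Z, D) visual features along the axial axis; snippet_emb: (D,) text snippet embedding
    scores = slice_feats @ snippet_emb                                  # (Z,) similarity per axial position
    probs = torch.softmax(scores, dim=0)                                # distribution over axial positions
    positions = torch.arange(slice_feats.size(0), dtype=probs.dtype,
                             device=probs.device) * spacing_mm         # depth of each position in mm
    return (probs * positions).sum()                                    # expected axial depth in mm

def localization_mae_mm(pred_depths, gt_depths):
    # mean absolute error in mm between predicted and mined ground-truth depths
    return torch.mean(torch.abs(pred_depths - gt_depths))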

Metadata

arXiv ID: 2603.02026
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-02
Fetched: 2026-03-03 04:34

Affiliations

Simon Ging: Computer Vision Group, University of Freiburg, Germany; Adaptive & Agentic AI
Philipp Arnold: Department of Radiology, Medical Center - University of Freiburg, Germany
Sebastian Walter: Chair of Algorithms and Data Structures, University of Freiburg, Germany
Hani Alnahas: Computer Vision Group, University of Freiburg, Germany
Hannah Bast: Chair of Algorithms and Data Structures, University of Freiburg, Germany
Elmar Kotter: Department of Radiology, Medical Center - University of Freiburg, Germany
Jiancheng Yang: ELLIS Institute Finland; School of Electrical Engineering, Aalto University, Finland
Behzad Bozorgtabar: Adaptive & Agentic AI
Thomas Brox: Computer Vision Group, University of Freiburg, Germany