Research Paper

AI · LLM · March 03, 2026

An Empirical Analysis of Calibration and Selective Prediction in Multimodal Clinical Condition Classification

Authors

L. Julián Lechuga López, Farah E. Shamout, Tim G. J. Rudner

Abstract

As artificial intelligence systems move toward clinical deployment, ensuring reliable prediction behavior is fundamental for safety-critical decision-making tasks. One proposed safeguard is selective prediction, where models can defer uncertain predictions to human experts for review. In this work, we empirically evaluate the reliability of uncertainty-based selective prediction in multilabel clinical condition classification using multimodal ICU data. Across a range of state-of-the-art unimodal and multimodal models, we find that selective prediction can substantially degrade performance despite strong standard evaluation metrics. This failure is driven by severe class-dependent miscalibration, whereby models assign high uncertainty to correct predictions and low uncertainty to incorrect ones, particularly for underrepresented clinical conditions. Our results show that commonly used aggregate metrics can obscure these effects, limiting their ability to assess selective prediction behavior in this setting. Taken together, our findings characterize a task-specific failure mode of selective prediction in multimodal clinical condition classification and highlight the need for calibration-aware evaluation to provide strong guarantees of safety and robustness in clinical AI.
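To make the mechanism under evaluation concrete: uncertainty-based selective prediction keeps a model's prediction only when its confidence clears a threshold and defers the rest for human review. The sketch below is a minimal, generic illustration of that idea (not the authors' implementation); the threshold value and the toy probabilities are illustrative assumptions. It also shows the failure mode the abstract describes, where a miscalibrated model is confident on an error and uncertain on a correct prediction, so abstention lowers accuracy on the retained subset.

```python
import numpy as np

def selective_prediction(probs, labels, threshold=0.8):
    """Keep predictions whose max class probability >= threshold; defer the rest.

    Returns coverage (fraction of examples kept) and selective accuracy
    (accuracy computed only on the kept subset).
    """
    confidence = probs.max(axis=1)          # model confidence per example
    preds = probs.argmax(axis=1)            # predicted class per example
    keep = confidence >= threshold          # deferral rule
    coverage = keep.mean()
    if not keep.any():
        return coverage, float("nan")       # everything deferred
    selective_acc = (preds[keep] == labels[keep]).mean()
    return coverage, selective_acc

# Toy example of class-dependent miscalibration: the model is confident
# on an incorrect prediction and uncertain on a correct one.
probs = np.array([
    [0.95, 0.05],   # confident and correct  -> kept
    [0.90, 0.10],   # confident but WRONG    -> kept
    [0.55, 0.45],   # uncertain but correct  -> deferred
])
labels = np.array([0, 1, 0])

cov, acc = selective_prediction(probs, labels, threshold=0.8)
# Full-coverage accuracy is 2/3, but selective accuracy drops to 1/2:
# abstention removed a correct prediction and kept a wrong one.
```

Under these toy numbers, coverage is 2/3 and selective accuracy is 0.5, below the 2/3 accuracy of predicting on every example, which mirrors the paper's observation that selective prediction can degrade rather than improve performance when uncertainty is miscalibrated.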

Metadata

arXiv ID: 2603.02719
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-03
Comment: 33 pages, 14 figures, 8 tables
PDF: https://arxiv.org/pdf/2603.02719v1
Fetched: 2026-03-04 03:41
