Multimodal Survival Analysis with Locally Deployable Large Language Models

Authors

Moritz Gögl, Christopher Yau

Abstract

We study multimodal survival analysis integrating clinical text, tabular covariates, and genomic profiles using locally deployable large language models (LLMs). Because many institutions face tight computational and privacy constraints, this setting favours lightweight, on-premises models. Our approach jointly estimates calibrated survival probabilities and generates concise, evidence-grounded prognosis text via teacher-student distillation and principled multimodal fusion. On a TCGA cohort, it outperforms standard baselines, avoids reliance on cloud services and the associated privacy concerns, and reduces the risk of hallucinated or miscalibrated estimates often observed in base LLMs.
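
The abstract names multimodal fusion into calibrated survival probabilities without detailing the architecture. The sketch below is a minimal, hypothetical illustration of concatenation-based fusion of a text embedding, tabular covariates, and genomic features into a discrete-time survival head in PyTorch; all module names, feature dimensions, and the number of time bins are assumptions for illustration, not the authors' implementation, and the teacher-student distillation step is omitted.

    # Illustrative sketch only; not the paper's actual model.
    # Assumes a locally deployable LLM encoder provides a pooled text embedding,
    # which is fused with tabular and genomic feature vectors.
    import torch
    import torch.nn as nn

    class MultimodalSurvivalHead(nn.Module):
        def __init__(self, text_dim=768, tab_dim=32, gene_dim=256, hidden=128, n_bins=20):
            super().__init__()
            # Project each modality into a shared space, then fuse by concatenation.
            self.text_proj = nn.Linear(text_dim, hidden)
            self.tab_proj = nn.Linear(tab_dim, hidden)
            self.gene_proj = nn.Linear(gene_dim, hidden)
            self.fusion = nn.Sequential(
                nn.Linear(3 * hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_bins),  # per-bin hazard logits (discrete-time survival)
            )

        def forward(self, text_emb, tab_x, gene_x):
            z = torch.cat(
                [self.text_proj(text_emb), self.tab_proj(tab_x), self.gene_proj(gene_x)],
                dim=-1,
            )
            hazards = torch.sigmoid(self.fusion(z))          # conditional hazard per time bin
            survival = torch.cumprod(1.0 - hazards, dim=-1)  # S(t_k) = prod_{j<=k} (1 - h_j)
            return hazards, survival

    # Usage with random tensors standing in for encoded clinical text, covariates, genomics.
    model = MultimodalSurvivalHead()
    text_emb = torch.randn(4, 768)   # e.g. pooled output of an on-premises LLM encoder
    tab_x = torch.randn(4, 32)
    gene_x = torch.randn(4, 256)
    hazards, survival = model(text_emb, tab_x, gene_x)
    print(survival.shape)            # (4, 20) survival probabilities over 20 time bins

In a setup like this, the hazards would typically be trained with a censoring-aware negative log-likelihood, and under the distillation scheme the abstract describes, the student's outputs could additionally be matched to those of a larger teacher model; the exact losses are not specified in the abstract.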

Metadata

arXiv ID: 2603.22158
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, cs.AI
Published: 2026-03-23
Fetched: 2026-03-24 06:02
Comment: NeurIPS 2025 Workshop on Multi-modal Foundation Models and Large Language Models for Life Sciences
Abstract page: https://arxiv.org/abs/2603.22158v1
PDF: https://arxiv.org/pdf/2603.22158v1
