February 19, 2026

A Contrastive Variational AutoEncoder for NSCLC Survival Prediction with Missing Modalities

Authors

Michele Zanitti, Vanja Miskovic, Francesco Trovò, Alessandra Laura Giulia Pedrocchi, Ming Shen, Yan Kyaw Tun, Arsela Prelaj, Sokol Kosta

Abstract

Predicting survival outcomes for non-small cell lung cancer (NSCLC) patients is challenging due to the heterogeneity of individual prognostic features. This task can benefit from the integration of whole-slide images, bulk transcriptomics, and DNA methylation, which offer complementary views of the patient's condition at diagnosis. However, real-world clinical datasets are often incomplete, with entire modalities missing for a significant fraction of patients. State-of-the-art models either build patient-level representations from whatever data are available or use generative models to infer missing modalities, but both approaches lack robustness under severe missingness. We propose a Multimodal Contrastive Variational AutoEncoder (MCVAE) to address this issue: modality-specific variational encoders capture the uncertainty in each data source, and a fusion bottleneck with learned gating mechanisms normalizes the contributions of the modalities that are present. We propose a multi-task objective that combines a survival loss and a reconstruction loss to regularize patient representations, along with a contrastive loss that enforces cross-modal alignment in the latent space. During training, we apply stochastic modality masking to improve robustness to arbitrary missingness patterns. Extensive evaluations on the TCGA-LUAD (n=475) and TCGA-LUSC (n=446) datasets demonstrate the efficacy of our approach in predicting disease-specific survival (DSS) and its robustness to severe missingness compared with two state-of-the-art models. Finally, we clarify the role of multimodal integration by testing our model on all subsets of modalities, finding that integration is not always beneficial to the task.
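The two robustness ingredients described above (gated fusion that renormalizes over present modalities, and stochastic modality masking during training) can be sketched in a few lines. This is an illustrative NumPy sketch under our own assumptions, not the authors' implementation: the function names `gated_fusion` and `stochastic_modality_mask` are hypothetical, the gates are fixed logits rather than learned parameters, and the variational encoders are reduced to precomputed per-modality latent vectors.

```python
import numpy as np

def gated_fusion(latents, present, gate_logits):
    """Fuse per-modality latents, with gates renormalized over present modalities.

    latents:     (M, d) array of modality latent vectors (e.g. WSI, RNA, methylation)
    present:     (M,) boolean mask of available modalities
    gate_logits: (M,) gate scores (learned in the paper; fixed here)
    """
    # Masked softmax: absent modalities get -inf logits, so their gates are
    # exactly zero and the remaining gates automatically sum to one under
    # any missingness pattern.
    logits = np.where(present, gate_logits, -np.inf)
    g = np.exp(logits - logits[present].max())
    g = g / g.sum()
    return g @ latents  # (d,) fused patient representation

def stochastic_modality_mask(present, drop_p, rng):
    """Randomly drop available modalities during training, keeping at least one."""
    keep = present & (rng.random(present.shape) > drop_p)
    if not keep.any():  # never mask out every modality
        keep[np.flatnonzero(present)[0]] = True
    return keep
```

For example, with three modalities, uniform gates, and the second modality missing, the fused vector is the mean of the first and third latents; applying `stochastic_modality_mask` on top of the real missingness pattern exposes the fusion bottleneck to harsher patterns than those seen at test time.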

Metadata

arXiv ID: 2602.17402
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-19
Fetched: 2026-02-21 18:51

Comment: Accepted at The 13th IEEE International Conference on Big Data (IEEE BigData 2025)
Abstract page: https://arxiv.org/abs/2602.17402v1
PDF: https://arxiv.org/pdf/2602.17402v1