March 24, 2026

Multi-Modal Image Fusion via Intervention-Stable Feature Learning

Authors

Xue Wang, Zheng Guan, Wenhua Qian, Chengchao Wang, Runzhuo Ma

Abstract

Multi-modal image fusion integrates complementary information from different modalities into a unified representation. Current methods predominantly optimize statistical correlations between modalities, often capturing dataset-induced spurious associations that degrade under distribution shifts. In this paper, we propose an intervention-based framework inspired by causal principles to identify robust cross-modal dependencies. Drawing insights from Pearl's causal hierarchy, we design three principled intervention strategies to probe different aspects of modal relationships: i) complementary masking with spatially disjoint perturbations tests whether modalities can genuinely compensate for each other's missing information, ii) random masking of identical regions identifies feature subsets that remain informative under partial observability, and iii) modality dropout evaluates the irreplaceable contribution of each modality. Based on these interventions, we introduce a Causal Feature Integrator (CFI) that uses adaptive invariance gating to identify and prioritize intervention-stable features, i.e., features that retain their importance across different perturbation patterns, thereby capturing robust modal dependencies rather than spurious correlations. Extensive experiments demonstrate that our method achieves state-of-the-art performance on both public benchmarks and downstream high-level vision tasks.

Metadata

arXiv ID: 2603.23272
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-24
Fetched: 2026-03-25 06:02

Comment: Accepted by CVPR 2026
Categories: cs.CV, cs.MM