March 10, 2026

MissBench: Benchmarking Multimodal Affective Analysis under Imbalanced Missing Modalities

Authors

Tien Anh Pham, Phuong-Anh Nguyen, Duc-Trong Le, Cam-Van Thi Nguyen

Abstract

Multimodal affective computing underpins key tasks such as sentiment analysis and emotion recognition. Standard evaluations, however, often assume that textual, acoustic, and visual modalities are equally available. In real applications, some modalities are systematically more fragile or expensive, creating imbalanced missing rates and training biases that task-level metrics alone do not reveal. We introduce MissBench, a benchmark and framework for multimodal affective tasks that standardizes both shared and imbalanced missing-rate protocols on four widely used sentiment and emotion datasets. MissBench also defines two diagnostic metrics. The Modality Equity Index (MEI) measures how fairly different modalities contribute across missing-modality configurations. The Modality Learning Index (MLI) quantifies optimization imbalance by comparing modality-specific gradient norms during training, aggregated across modality-related modules. Experiments on representative method families show that models that appear robust under shared missing rates can still exhibit marked modality inequity and optimization imbalance under imbalanced conditions. These findings position MissBench, together with MEI and MLI, as practical tools for stress-testing and analyzing multimodal affective models in realistic incomplete-modality settings. For reproducibility, we release our code at: https://anonymous.4open.science/r/MissBench-4098/
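
The abstract only sketches how MLI works: it compares modality-specific gradient norms during training, aggregated across modality-related modules, without giving a formula. The snippet below is a minimal sketch of that kind of bookkeeping under stated assumptions; the mapping from modalities to module names, the use of the L2 norm, and the max/min ratio used as an imbalance summary are illustrative choices, not the authors' definition of MLI.

import torch
import torch.nn as nn


def modality_gradient_norms(model: nn.Module,
                            modality_modules: dict[str, list[str]]) -> dict[str, float]:
    """Aggregate the L2 gradient norm over each modality's modules.

    modality_modules maps a modality name (e.g. "text") to the names of the
    sub-modules that process it (encoder, projection head, ...). Call this
    after loss.backward() and before optimizer.step(), when gradients exist.
    """
    named = dict(model.named_modules())
    norms = {}
    for modality, module_names in modality_modules.items():
        squared_sum = 0.0
        for name in module_names:
            for p in named[name].parameters():
                if p.grad is not None:
                    squared_sum += p.grad.detach().norm(2).item() ** 2
        norms[modality] = squared_sum ** 0.5
    return norms


def imbalance_ratio(norms: dict[str, float], eps: float = 1e-12) -> float:
    """Crude imbalance summary: ratio of the largest to the smallest norm."""
    values = list(norms.values())
    return max(values) / (min(values) + eps)

In a training loop one would call modality_gradient_norms(model, {"text": ["text_encoder"], "audio": ["audio_encoder"], "vision": ["vision_encoder"]}) after each backward pass and track how the per-modality norms (or their ratio) evolve over training; the paper's actual MLI may aggregate these statistics differently.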

Metadata

arXiv ID: 2603.09874
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-10
Fetched: 2026-03-11 06:02
