
Voices, Faces, and Feelings: Multi-modal Emotion-Cognition Captioning for Mental Health Understanding

Authors

Zhiyuan Zhou, Yanrong Guo, Shijie Hao

Abstract

Emotional and cognitive factors are essential for understanding mental health disorders. However, existing methods often treat multi-modal data as classification tasks, limiting interpretability, especially for emotion and cognition. Although large language models (LLMs) offer opportunities for mental health analysis, they mainly rely on textual semantics and overlook fine-grained emotional and cognitive cues in multi-modal inputs. While some studies incorporate emotional features via transfer learning, their connection to mental health conditions remains implicit. To address these issues, we propose ECMC, a novel task that aims to generate natural language descriptions of emotional and cognitive states from multi-modal data and to produce emotion-cognition profiles that improve both the accuracy and interpretability of mental health assessments. We adopt an encoder-decoder architecture in which modality-specific encoders extract features that are then fused by a dual-stream BridgeNet based on a Q-Former. Contrastive learning enhances the extraction of emotional and cognitive features. A LLaMA decoder then aligns these features with annotated captions to produce detailed descriptions. Extensive objective and subjective evaluations demonstrate that: 1) ECMC outperforms existing multi-modal LLMs and mental health models in generating emotion-cognition captions; 2) the generated emotion-cognition profiles significantly improve assistive diagnosis and interpretability in mental health analysis.
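
The abstract outlines an encoder-decoder pipeline: modality-specific encoders, a dual-stream Q-Former-style BridgeNet for fusion, contrastive learning on the emotional and cognitive representations, and a LLaMA decoder conditioned on the fused features. The sketch below is a minimal, hypothetical PyTorch illustration of that shape only; all module names, feature dimensions, the InfoNCE-style loss, and the pairing used for the contrastive term are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# modality encoders -> dual-stream Q-Former-style fusion -> contrastive term
# on the emotion/cognition streams -> projection into an LLM embedding space.
# Dimensions, names, and the loss are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QFormerStream(nn.Module):
    """One fusion stream: learnable queries cross-attend to the fused modality tokens."""

    def __init__(self, dim=256, num_queries=16, num_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim * 4),
                                 nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, tokens):                       # tokens: (B, T, dim)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        fused, _ = self.cross_attn(q, tokens, tokens)
        return fused + self.ffn(fused)               # (B, num_queries, dim)


class ECMCSketch(nn.Module):
    def __init__(self, audio_dim=128, face_dim=512, text_dim=768,
                 dim=256, llm_dim=4096):
        super().__init__()
        # Linear projections stand in for the real modality-specific encoders
        # (acoustic, facial, textual backbones), which are assumptions here.
        self.proj = nn.ModuleDict({
            "audio": nn.Linear(audio_dim, dim),
            "face": nn.Linear(face_dim, dim),
            "text": nn.Linear(text_dim, dim),
        })
        # Dual streams: one for emotional cues, one for cognitive cues.
        self.emotion_stream = QFormerStream(dim)
        self.cognition_stream = QFormerStream(dim)
        # Map the fused query tokens into the decoder's embedding space
        # (LLaMA-sized here) so they can act as a soft prefix for captioning.
        self.to_llm = nn.Linear(dim, llm_dim)

    def forward(self, audio, face, text):
        tokens = torch.cat([self.proj["audio"](audio),
                            self.proj["face"](face),
                            self.proj["text"](text)], dim=1)    # (B, T_total, dim)
        emo = self.emotion_stream(tokens)
        cog = self.cognition_stream(tokens)
        prefix = self.to_llm(torch.cat([emo, cog], dim=1))      # decoder soft prompt
        return emo, cog, prefix


def info_nce(anchors, positives, temperature=0.07):
    """Symmetric InfoNCE, used as a stand-in for the paper's contrastive objective."""
    a = F.normalize(anchors.mean(dim=1), dim=-1)     # pool queries -> (B, dim)
    p = F.normalize(positives.mean(dim=1), dim=-1)
    logits = a @ p.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


if __name__ == "__main__":
    model = ECMCSketch()
    B = 4
    audio = torch.randn(B, 50, 128)   # e.g. frame-level acoustic features
    face = torch.randn(B, 30, 512)    # e.g. per-frame facial features
    text = torch.randn(B, 40, 768)    # e.g. transcript token embeddings
    emo, cog, prefix = model(audio, face, text)
    # Illustrative contrastive term over the batch; the real supervision targets
    # (emotion/cognition annotations) are not specified in the abstract.
    loss = info_nce(emo, cog)
    print(prefix.shape, loss.item())
```

In this reading, the fused query tokens serve as a soft prompt that the LLaMA decoder attends to while generating the emotion-cognition caption; how the paper actually conditions the decoder and constructs contrastive pairs is not detailed in the abstract.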

Metadata

arXiv ID: 2603.01816
Provider: ARXIV
Primary Category: cs.MM
Published: 2026-03-02
Fetched: 2026-03-03 04:34
Comment: Accepted at AAAI 2026
Links: https://arxiv.org/abs/2603.01816v1 (abstract), https://arxiv.org/pdf/2603.01816v1 (PDF)
