EmoOmni: Bridging Emotional Understanding and Expression in Omni-Modal LLMs

Authors

Wenjie Tian, Zhixian Zhao, Jingbin Hu, Huakang Chen, Haohe Liu, Binshen Mu, Lei Xie

Abstract

The evolution of Omni-Modal Large Language Models (Omni-LLMs) has revolutionized human-computer interaction, enabling unified audio-visual perception and speech response. However, existing Omni-LLMs struggle with complex real-world scenarios, often producing superficial understanding and contextually mismatched emotional responses. This issue is further intensified by the Thinker-Talker architecture of Omni-LLMs, in which the two components are coupled only implicitly through hidden states, leading to the loss of emotional details. In this work, we present EmoOmni, a unified framework for accurate emotional understanding and expression in multimodal emotional dialogue. At its core, we introduce the emotional Chain-of-Thought (E-CoT), which enforces reasoning from fine-grained multimodal perception to the textual response. Moreover, we explicitly treat the E-CoT as a high-level emotional instruction that guides the talker, enabling accurate emotional expression. Complementing the model, we construct EmoOmniPipe to obtain annotated real-world dialogue data and establish a benchmark, EmoOmniEval, to facilitate systematic assessment of the multimodal emotional dialogue task. Experiments show that EmoOmni-7B achieves performance comparable to Qwen3Omni-30B-A3B-Thinking when using the same talker.
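
The abstract contrasts an implicit Thinker-Talker coupling (hidden states only) with EmoOmni's explicit use of the E-CoT as an emotional instruction for the talker. The sketch below is a minimal Python illustration of that interface difference; all class and function names (ThinkerOutput, talker_baseline, talker_with_ecot) are illustrative assumptions, not the paper's actual implementation, which the abstract does not detail.

# Minimal sketch, assuming the thinker emits both hidden states and an E-CoT trace.
from dataclasses import dataclass, field

@dataclass
class ThinkerOutput:
    response_text: str                       # textual reply shown to the user
    emotional_cot: str                       # E-CoT: explicit reasoning over multimodal emotional cues
    hidden_states: list = field(default_factory=list)  # implicit features passed in baseline Omni-LLMs

def talker_baseline(out: ThinkerOutput) -> str:
    """Baseline coupling: the talker sees only hidden states, so fine-grained
    emotional details inferred by the thinker can be lost."""
    return f"<speech conditioned on {len(out.hidden_states)} hidden features>"

def talker_with_ecot(out: ThinkerOutput) -> str:
    """EmoOmni-style coupling as described in the abstract: the E-CoT is treated
    as a high-level emotional instruction that explicitly guides the talker."""
    return (f"<speech conditioned on instruction {out.emotional_cot!r} "
            f"plus {len(out.hidden_states)} hidden features>")

if __name__ == "__main__":
    thinker_out = ThinkerOutput(
        response_text="I'm sorry to hear that. Do you want to talk about it?",
        emotional_cot="Trembling voice and downcast expression indicate sadness; "
                      "respond in a gentle, reassuring tone.",
        hidden_states=[0.12, -0.48, 0.93],
    )
    print(talker_baseline(thinker_out))
    print(talker_with_ecot(thinker_out))

Under these assumptions, the only design change is that the talker receives the emotional reasoning as an explicit, human-readable instruction rather than relying on it surviving compression into hidden states.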

Metadata

arXiv ID: 2602.21900
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-02-25
Fetched: 2026-02-26 05:00
