
Team LEYA in 10th ABAW Competition: Multimodal Ambivalence/Hesitancy Recognition Approach

Authors

Elena Ryumina, Alexandr Axyonov, Dmitry Sysoev, Timur Abdulkadirov, Kirill Almetov, Yulia Morozova, Dmitry Ryumin

Abstract

Ambivalence/hesitancy recognition in unconstrained videos is a challenging problem due to the subtle, multimodal, and context-dependent nature of this behavioral state. In this paper, a multimodal approach for video-level ambivalence/hesitancy recognition is presented for the 10th ABAW Competition. The proposed approach integrates four complementary modalities: scene, face, audio, and text. Scene dynamics are captured with a VideoMAE-based model; facial information is encoded through emotional frame-level embeddings aggregated by statistical pooling; acoustic representations are extracted with EmotionWav2Vec2.0 and processed by a Mamba-based temporal encoder; and linguistic cues are modeled using fine-tuned transformer-based text models. The resulting unimodal embeddings are further combined using multimodal fusion models, including prototype-augmented variants. Experiments on the BAH corpus demonstrate clear gains of multimodal fusion over all unimodal baselines. The best unimodal configuration achieved an average MF1 of 70.02%, whereas the best multimodal fusion model reached 83.25%. The highest final test performance, 71.43%, was obtained by an ensemble of five prototype-augmented fusion models. These results highlight the importance of complementary multimodal cues and robust fusion strategies for ambivalence/hesitancy recognition.
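
To make the pipeline concrete, the sketch below illustrates two of the ideas named in the abstract: statistical pooling of frame-level face embeddings into a single video-level vector, and a simple late-fusion classifier over the concatenated unimodal embeddings. This is a minimal illustration, not the authors' implementation: the embedding dimensions, the two-layer fusion head, and the helper names (pool_face_embeddings, LateFusionHead) are assumptions made for the example, and the paper's actual components (VideoMAE, EmotionWav2Vec2.0, the Mamba temporal encoder, and the prototype-augmented fusion) are not reproduced here.

    # Minimal sketch (not the authors' code): statistical pooling of frame-level
    # embeddings and a toy late-fusion classifier over unimodal embeddings.
    # All dimensions and layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn


    def pool_face_embeddings(frame_embeddings: torch.Tensor) -> torch.Tensor:
        """Aggregate (num_frames, dim) frame-level embeddings into one
        video-level vector by concatenating per-dimension mean and std."""
        mean = frame_embeddings.mean(dim=0)
        std = frame_embeddings.std(dim=0)
        return torch.cat([mean, std], dim=-1)  # shape: (2 * dim,)


    class LateFusionHead(nn.Module):
        """Toy fusion model: concatenate the four video-level unimodal
        embeddings and classify ambivalence/hesitancy (binary) with an MLP."""

        def __init__(self, scene_dim: int, face_dim: int,
                     audio_dim: int, text_dim: int, hidden: int = 256):
            super().__init__()
            in_dim = scene_dim + face_dim + audio_dim + text_dim
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden),
                nn.ReLU(),
                nn.Dropout(0.2),
                nn.Linear(hidden, 2),  # two classes: ambivalent/hesitant vs. not
            )

        def forward(self, scene, face, audio, text):
            fused = torch.cat([scene, face, audio, text], dim=-1)
            return self.net(fused)


    if __name__ == "__main__":
        # Illustrative inputs; real embedding sizes depend on the encoders used.
        face_frames = torch.randn(120, 256)           # 120 frames of face embeddings
        face_vec = pool_face_embeddings(face_frames)  # -> (512,)
        scene_vec = torch.randn(768)                  # e.g. a clip-level scene embedding
        audio_vec = torch.randn(1024)                 # e.g. a pooled speech embedding
        text_vec = torch.randn(768)                   # e.g. a pooled transcript embedding

        model = LateFusionHead(768, 512, 1024, 768)
        logits = model(scene_vec.unsqueeze(0), face_vec.unsqueeze(0),
                       audio_vec.unsqueeze(0), text_vec.unsqueeze(0))
        print(logits.shape)  # torch.Size([1, 2])

In the approach described above, the fused representation is additionally handled by prototype-augmented fusion variants and several such models are ensembled for the final prediction; the sketch only shows the plain concatenate-and-classify baseline.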

Metadata

arXiv ID: 2603.12848
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-13
Fetched: 2026-03-16 06:01

