AI · LLM · March 10, 2026

ALARM: Audio-Language Alignment for Reasoning Models

Authors

Petr Grinberg, Hassan Shahmohammadi

Abstract

Large audio language models (ALMs) extend LLMs with auditory understanding. A common approach freezes the LLM and trains only an adapter on self-generated targets. However, this approach fails for reasoning LLMs (RLMs), whose built-in chain-of-thought traces expose the textual surrogate input and yield unnatural responses. We propose self-rephrasing, which converts self-generated responses into audio-understanding variants compatible with RLMs while preserving distributional alignment. We further fuse and compress the outputs of multiple audio encoders for stronger representations. For training, we construct a 6M-instance multi-task corpus (2.5M unique prompts) spanning 19K hours of speech, music, and sound. Our 4B-parameter ALM outperforms similarly sized models and surpasses most larger ALMs on related audio-reasoning benchmarks, while preserving textual capabilities at a low training cost. Notably, we achieve the best open-source results on the MMAU-speech and MMSU benchmarks and rank third among all models.
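
The abstract does not spell out the architecture, but the recipe it sketches (features from multiple frozen audio encoders fused and compressed by a small trained adapter that feeds a frozen LLM) is easy to illustrate. Below is a minimal, hypothetical PyTorch sketch of that idea; the class name, dimensions, pooling-based compression, and MLP adapter are all assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn


class FusedAudioAdapter(nn.Module):
    """Hypothetical sketch: fuse features from several frozen audio
    encoders, compress the fused sequence in time, and project it into
    the (frozen) LLM's embedding space. Names and sizes are illustrative."""

    def __init__(self, encoder_dims, llm_dim, fused_dim=1024, pool_stride=4):
        super().__init__()
        # One linear projection per encoder so heterogeneous feature
        # sizes can be summed in a shared fused space.
        self.projections = nn.ModuleList(
            nn.Linear(d, fused_dim) for d in encoder_dims
        )
        # Temporal compression: average-pool along time to cut the
        # number of audio tokens handed to the LLM.
        self.pool = nn.AvgPool1d(kernel_size=pool_stride, stride=pool_stride)
        # Adapter into the LLM's token-embedding dimension; this would
        # be the only trained component (encoders and LLM stay frozen).
        self.to_llm = nn.Sequential(
            nn.Linear(fused_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, encoder_outputs):
        # encoder_outputs: list of (batch, time, dim_i) tensors, one per
        # encoder, assumed already resampled to a common time length.
        fused = sum(p(x) for p, x in zip(self.projections, encoder_outputs))
        fused = self.pool(fused.transpose(1, 2)).transpose(1, 2)
        return self.to_llm(fused)  # audio "tokens" for the frozen LLM


if __name__ == "__main__":
    # Toy shapes: two encoders with different feature sizes.
    adapter = FusedAudioAdapter(encoder_dims=[768, 1280], llm_dim=2560)
    feats = [torch.randn(1, 100, 768), torch.randn(1, 100, 1280)]
    print(adapter(feats).shape)  # torch.Size([1, 25, 2560])
```

In a setup like this, only the adapter's parameters would receive gradients; the self-rephrasing step described in the abstract would then supply the RLM-compatible text targets used to train it.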

Metadata

arXiv ID: 2603.09556
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-10
Comment: Submitted to Interspeech 2026
PDF: https://arxiv.org/pdf/2603.09556v1
Fetched: 2026-03-11 06:02
