AI · LLM · February 19, 2026

AudioChat: Unified Audio Storytelling, Editing, and Understanding with Transfusion Forcing

Authors

William Chen, Prem Seetharaman, Rithesh Kumar, Oriol Nieto, Shinji Watanabe, Justin Salamon, Zeyu Jin

Abstract

Despite recent breakthroughs, audio foundation models struggle to process complex multi-source acoustic scenes. We refer to this challenging domain as audio stories, which may contain multiple speakers as well as background and foreground sound effects. Compared to traditional audio processing tasks, audio stories introduce new layers of semantic, temporal, and physical complexity. To address this challenge, we propose AudioChat, a framework for developing audio foundation models that can generate, edit, and understand audio stories. AudioChat introduces a new paradigm in which LLM-based tool-calling agents simulate interactions between users and the system, and these simulated dialogues are used as training data. We also introduce a novel Audio Transfusion Forcing objective to train the AudioChat model, allowing it to simultaneously decompose high-level instructions via structured chain-of-thought reasoning and perform interactive multi-turn audio understanding and generation. To evaluate generation and editing performance, we develop three new metrics that directly measure task performance instead of relying on distribution-based scoring. We highly encourage readers to visit our demo to better understand the capabilities of AudioChat: https://wanchichen.github.io/audiochat/.
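
The abstract describes a data-generation paradigm in which an LLM-based tool-calling agent converses with a simulated user, and the resulting multi-turn dialogues become training data. The sketch below is only an illustration of how such a rollout loop might look; it is not the authors' implementation, and every name in it (call_llm, TOOLS, simulate_dialogue, the individual tool names) is a hypothetical placeholder.

# Minimal sketch (not the authors' code) of the simulated-dialogue idea from the
# abstract: one LLM plays the user, a tool-calling agent plays the system, and
# the transcript of their exchange is stored as a multi-turn training example.
import json
import random

# Hypothetical tool inventory an audio-story agent might expose.
TOOLS = ["generate_speech", "generate_sfx", "mix_tracks", "describe_audio"]

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-model query; a real system would call an actual LLM."""
    return f"[model response to: {prompt[:40]}...]"

def simulate_user_turn(history):
    """A second LLM (or prompt template) plays the user, issuing a high-level
    instruction such as 'add rain behind the narrator in the second scene'."""
    return call_llm("Act as a user editing an audio story. History: " + json.dumps(history))

def agent_turn(history):
    """The tool-calling agent decomposes the latest instruction into a tool call
    plus a reasoning trace (the structured chain of thought the abstract mentions)."""
    tool = random.choice(TOOLS)  # stand-in for the agent's actual tool selection
    reasoning = call_llm("Plan tool calls for: " + json.dumps(history[-1]))
    return {"reasoning": reasoning, "tool_call": {"name": tool, "args": {}}}

def simulate_dialogue(num_turns: int = 3):
    """Roll out one simulated user/system conversation and return the transcript
    as a candidate training example."""
    history = []
    for _ in range(num_turns):
        history.append({"role": "user", "content": simulate_user_turn(history)})
        history.append({"role": "assistant", **agent_turn(history)})
    return history

if __name__ == "__main__":
    print(json.dumps(simulate_dialogue(), indent=2))

In a real pipeline the placeholder LLM calls would be replaced by actual model queries and audio tools, and the saved transcripts would supervise multi-turn training of the AudioChat model.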

Metadata

arXiv ID: 2602.17097
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-02-19
Fetched: 2026-02-21 18:51
