
Resonate: Reinforcing Text-to-Audio Generation via Online Feedback from Large Audio Language Models

Authors

Xiquan Li, Junxi Liu, Wenxi Chen, Haina Zhu, Ziyang Ma, Xie Chen

Abstract

Reinforcement Learning (RL) has become an effective paradigm for enhancing Large Language Models (LLMs) and visual generative models. However, its application in text-to-audio (TTA) generation remains largely under-explored. Prior work typically employs offline methods like Direct Preference Optimization (DPO) and leverages Contrastive Language-Audio Pretraining (CLAP) models as reward functions. In this study, we investigate the integration of online Group Relative Policy Optimization (GRPO) into TTA generation. We adapt the algorithm for Flow Matching-based audio models and demonstrate that online RL significantly outperforms its offline counterparts. Furthermore, we incorporate rewards derived from Large Audio Language Models (LALMs), which can provide fine-grained scoring signals that are better aligned with human perception. With only 470M parameters, our final model, Resonate, establishes a new SOTA on TTA-Bench in terms of both audio quality and semantic alignment.
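The core of GRPO is scoring a group of rollouts for the same prompt against each other rather than against a learned value function. The sketch below is a minimal illustration of that group-relative advantage computation, not code from the paper; the function name, group size, and reward values are assumptions for exposition.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """GRPO-style advantage: normalize each rollout's reward by the
    mean and standard deviation of its own group (one prompt, G samples)."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Hypothetical example: G = 4 audio clips generated for one text prompt,
# each scored by a reward model (e.g., CLAP similarity or an LALM rating).
rewards = np.array([0.62, 0.71, 0.35, 0.88])
advantages = group_relative_advantages(rewards)
print(advantages)  # clips scored above the group mean get positive advantage
```

In an online setup like the one the abstract describes, these advantages would weight the policy-gradient update of the Flow Matching generator after every batch of fresh samples, which is what distinguishes it from offline preference methods such as DPO.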

Metadata

arXiv ID: 2603.11661
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-03-12
Fetched: 2026-03-14 05:03

