March 10, 2026

Finetuning a Text-to-Audio Model for Room Impulse Response Generation

Authors

Kirak Kim, Sungyoung Kim

Abstract

Room Impulse Responses (RIRs) enable realistic acoustic simulation, with applications ranging from multimedia production to speech data augmentation. However, acquiring high-quality real-world RIRs is labor-intensive, and data scarcity remains a challenge for data-driven RIR generation approaches. In this paper, we propose a novel approach to RIR generation by fine-tuning a pre-trained text-to-audio model, demonstrating for the first time that large-scale generative audio priors can be effectively leveraged for the task. To address the lack of text-RIR paired data, we establish a labeling pipeline utilizing vision-language models to extract acoustic descriptions from existing image-RIR datasets. We introduce an in-context learning strategy to accommodate free-form user prompts during inference. Evaluations involving MUSHRA listening tests and downstream ASR performance demonstrate that our model generates plausible RIRs and serves as an effective tool for speech data augmentation.
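The downstream augmentation use described above — applying an RIR to dry speech to simulate a room — is conventionally done by convolution. A minimal sketch follows; the `augment_with_rir` helper, the toy signals, and the peak-matching normalization are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def augment_with_rir(speech: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve dry speech with a room impulse response to add reverberation."""
    wet = fftconvolve(speech, rir, mode="full")
    # Match the dry signal's peak level so augmented clips stay comparable in loudness
    # (one common convention; gain handling varies across augmentation pipelines).
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet * (np.max(np.abs(speech)) / peak)
    return wet

# Toy example: 1 s of noise standing in for speech, and a 0.5 s
# decaying-exponential stand-in for an RIR, both at 16 kHz.
sr = 16000
speech = np.random.default_rng(0).standard_normal(sr)
rir = np.exp(-np.linspace(0.0, 8.0, sr // 2))
wet = augment_with_rir(speech, rir)
```

With `mode="full"`, the reverberated output is `len(speech) + len(rir) - 1` samples long; pipelines that need fixed-length clips typically truncate back to the dry length.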

Metadata

arXiv ID: 2603.09708
Provider: ARXIV
Primary Category: eess.AS
Published: 2026-03-10
Fetched: 2026-03-11 06:02

Comments: 5 pages, 2 figures, submitted to Interspeech 2026