March 03, 2026

Rethinking Training Targets, Architectures and Data Quality for Universal Speech Enhancement

Authors

Szu-Wei Fu, Rong Chao, Xuesong Yang, Sung-Feng Huang, Ryandhimas E. Zezario, Rauf Nasretdinov, Ante Jukić, Yu Tsao, Yu-Chiang Frank Wang

Abstract

Universal Speech Enhancement (USE) aims to restore speech quality under diverse degradation conditions while preserving signal fidelity. Despite recent progress, key challenges in training target selection, the distortion--perception tradeoff, and data curation remain unresolved. In this work, we systematically address these three overlooked problems. First, we revisit the conventional practice of using early-reflected speech as the dereverberation target and show that it can degrade perceptual quality and downstream ASR performance. We instead demonstrate that time-shifted anechoic clean speech provides a superior learning target. Second, guided by the distortion--perception tradeoff theory, we propose a simple two-stage framework that achieves minimal distortion under a given level of perceptual quality. Third, we analyze the trade-off between training data scale and quality for USE, revealing that training on large uncurated corpora imposes a performance ceiling, as models struggle to remove subtle artifacts. Our method achieves state-of-the-art performance on the URGENT 2025 non-blind test set and exhibits strong language-agnostic generalization, making it effective for improving TTS training data. Code and models will be released upon acceptance.

Metadata

arXiv ID: 2603.02641
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-03-03
Fetched: 2026-03-04 03:41
