March 11, 2026

Training-Free Multi-Step Inference for Target Speaker Extraction

Authors

Zhenghai You, Ying Shi, Lantian Li, Dong Wang

Abstract

Target speaker extraction (TSE) aims to recover a target speaker's speech from a mixture using a reference utterance as a cue. Most TSE systems adopt conditional auto-encoder architectures with one-step inference. Inspired by test-time scaling, we propose a training-free multi-step inference method that enables iterative refinement with a frozen pretrained model. At each step, new candidates are generated by interpolating the original mixture and the previous estimate, and the best candidate is selected for further refinement until convergence. Experiments show that, when ground-truth target speech is available, optimizing an intrusive metric (SI-SDRi) yields consistent gains across multiple evaluation metrics. Without ground truth, optimizing non-intrusive metrics (UTMOS or SpkSim) improves the corresponding metric but may hurt others. We therefore introduce joint metric optimization to balance these objectives, enabling controllable extraction preferences for practical deployment.
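The refinement loop described above can be sketched in miniature. This is a hypothetical illustration, not the authors' code: `extract` stands in for the frozen pretrained TSE model, `si_sdr` implements the intrusive selection metric (SI-SDR), and the interpolation weights `alphas`, the tolerance, and the toy model behavior are all assumptions made for the example.

```python
# Minimal sketch of training-free multi-step TSE inference (assumed details).
import math

def si_sdr(est, ref):
    """Scale-invariant SDR in dB between two equal-length signals."""
    dot = sum(e * r for e, r in zip(est, ref))
    scale = dot / sum(r * r for r in ref)
    target = [scale * r for r in ref]
    noise = [e - t for e, t in zip(est, target)]
    num = sum(t * t for t in target)
    den = sum(n * n for n in noise) + 1e-12
    return 10 * math.log10(num / den)

def extract(mixture, target):
    """Toy stand-in for a frozen model: nudges its input toward the target."""
    return [0.5 * x + 0.5 * t for x, t in zip(mixture, target)]

def multi_step_tse(mixture, target, alphas=(0.0, 0.25, 0.5, 0.75, 1.0),
                   max_steps=10, tol=1e-3):
    estimate = extract(mixture, target)            # one-step baseline
    best_score = si_sdr(estimate, target)
    for _ in range(max_steps):
        # Generate candidates by interpolating the original mixture with
        # the previous estimate, then re-running the frozen model.
        candidates = [extract([a * m + (1 - a) * e
                               for m, e in zip(mixture, estimate)], target)
                      for a in alphas]
        # Select the metric-best candidate for further refinement.
        best_cand = max(candidates, key=lambda c: si_sdr(c, target))
        score = si_sdr(best_cand, target)
        if score - best_score < tol:               # converged: stop refining
            break
        estimate, best_score = best_cand, score
    return estimate, best_score
```

In this toy setup the metric is intrusive (it needs the ground-truth target), matching the paper's first experimental condition; swapping `si_sdr` for a non-intrusive scorer such as a MOS or speaker-similarity predictor, or a weighted sum of several scorers, would correspond to the non-intrusive and joint-optimization variants.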

Metadata

arXiv ID: 2603.10921
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-03-11
Fetched: 2026-03-12 04:21
