AI LLM March 06, 2026

Prosodic Boundary-Aware Streaming Generation for LLM-Based TTS with Streaming Text Input

Authors

Changsong Liu, Tianrui Wang, Ye Ni, Yizhou Peng, Eng Siong Chng

Abstract

Streaming TTS that receives streaming text is essential for interactive systems, yet this scheme faces two major challenges: unnatural prosody due to missing lookahead, and long-form collapse due to unbounded context. We propose a prosodic-boundary-aware post-training strategy that adapts a pretrained LLM-based TTS model using weakly time-aligned data. Specifically, the model is adapted to stop early at specified content boundaries when provided with limited future text. During inference, a sliding-window prompt carries forward previous text and speech tokens, ensuring bounded context and seamless concatenation. Evaluations show our method outperforms a CosyVoice-style interleaved baseline in both short- and long-form scenarios. In long-text synthesis especially, it achieves a 66.2% absolute reduction in word error rate (from 71.0% to 4.8%) and relative improvements of 16.1% and 1.5% in speaker and emotion similarity, offering a robust solution for streaming TTS with incremental text.
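The sliding-window inference loop described in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the window size, and the stand-in `generate_fn` are assumptions made for the sketch; the real system operates on LLM speech tokens and stops early at prosodic boundaries.

```python
from collections import deque

def synthesize_streaming(text_chunks, generate_fn, window_size=3):
    """Sliding-window streaming synthesis sketch.

    Keeps only the last `window_size` (text, speech) pairs as prompt
    context, so the prompt stays bounded no matter how long the input
    stream runs. `generate_fn(prompt_text, prompt_speech, chunk)` stands
    in for the LLM-based TTS model, which in the paper stops early at a
    prosodic boundary given limited future text.
    """
    window = deque(maxlen=window_size)  # bounded context
    output = []
    for chunk in text_chunks:
        prompt_text = [t for t, _ in window]
        prompt_speech = [s for _, s in window]
        speech = generate_fn(prompt_text, prompt_speech, chunk)
        window.append((chunk, speech))  # carry forward for the next step
        output.extend(speech)           # seamless concatenation of segments
    return output

# Toy stand-in for the model: "speech tokens" = uppercased words.
def toy_tts(prompt_text, prompt_speech, chunk):
    return [w.upper() for w in chunk.split()]

audio = synthesize_streaming(["hello there,", "how are you"], toy_tts)
```

The key design point the sketch captures is that `deque(maxlen=...)` evicts the oldest (text, speech) pair automatically, which is what keeps the context bounded for arbitrarily long incremental input.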

Metadata

arXiv ID: 2603.06444
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-03-06
Fetched: 2026-03-09 06:05
