From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs

Authors

Xiaoyong Guo, Nanjie Li, Zijie Zeng, Kai Wang, Hao Huang, Haihua Xu, Wei Shi

Abstract

Contextual automatic speech recognition (ASR) with Speech-LLMs is typically trained with oracle conversation history, but relies on error-prone history at inference, causing a train-test mismatch in the context channel that we term contextual exposure bias. We propose a unified training framework to improve robustness under realistic histories: (i) Teacher Error Knowledge, using Whisper large-v3 hypotheses as training-time history, (ii) Context Dropout, to regularize over-reliance on history, and (iii) Direct Preference Optimization (DPO) on curated failure cases. Experiments on TED-LIUM 3 (in-domain) and zero-shot LibriSpeech (out-of-domain) show consistent gains under predicted-history decoding. With a two-utterance history as context, SFT with Whisper hypotheses reduces WER from 5.59% (oracle-history training) to 5.47%, and DPO further improves it to 5.17%. Under irrelevant-context attacks, DPO yields the smallest degradation (5.17% -> 5.63%), indicating improved robustness to misleading context. Our code and models are available at https://github.com/XYGuo1996/Contextual_Speech_LLMs.
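The Context Dropout idea described in the abstract (randomly withholding the conversation history during training so the model does not over-rely on it) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the drop probability `p_drop`, the helper names, and the prompt format are all assumptions for illustration.

```python
import random

def apply_context_dropout(history_utterances, p_drop=0.3, rng=None):
    """With probability p_drop, drop the entire conversation history for
    this training example, forcing the model to rely on the audio alone.
    p_drop is a hypothetical hyperparameter, not a value from the paper."""
    rng = rng or random.Random()
    if rng.random() < p_drop:
        return []  # train this example with no history at all
    return history_utterances

def build_prompt(history_utterances, instruction="Transcribe the audio."):
    # Hypothetical prompt format: prepend any surviving history to the task.
    context = " ".join(history_utterances)
    return f"Context: {context}\n{instruction}" if context else instruction
```

At training time, `apply_context_dropout` would be called per example before prompt construction; with `p_drop=1.0` every example trains context-free, and with `p_drop=0.0` the setup reduces to ordinary oracle-history training.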

Metadata

arXiv ID: 2603.24034
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-25
Fetched: 2026-03-26 06:02
