February 26, 2026

Deepfake Word Detection by Next-token Prediction using Fine-tuned Whisper

Authors

Hoan My Tran, Xin Wang, Wanying Ge, Xuechen Liu, Junichi Yamagishi

Abstract

Deepfake speech utterances can be forged by replacing one or more words in a bona fide utterance with semantically different words synthesized by speech generative models. While a dedicated synthetic word detector could be developed, we investigate a cost-effective method that fine-tunes a pre-trained Whisper model to detect synthetic words while transcribing the input utterance via next-token prediction. We further investigate using partially vocoded utterances as the fine-tuning data, thereby reducing the cost of data collection. Our experiments demonstrate that, on in-domain test data, the fine-tuned Whisper yields low synthetic-word detection error rates and transcription error rates. On out-of-domain test data with synthetic words produced by unseen speech generative models, the fine-tuned Whisper remains on par with a dedicated ResNet-based detection model; however, the overall performance degradation calls for strategies to improve its generalization capability.
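The core idea of detecting synthetic words "while transcribing via next-token prediction" can be pictured as training the decoder to emit the transcript with inline marker tokens around the forged words. The sketch below shows one way such a fine-tuning target could be built; the `<fake>`/`</fake>` marker tokens and this tagging scheme are illustrative assumptions, not necessarily the paper's exact formulation.

```python
def tag_transcript(words, fake_flags, open_tok="<fake>", close_tok="</fake>"):
    """Build a next-token-prediction target that marks synthetic words.

    words       -- transcript tokens of the utterance
    fake_flags  -- parallel booleans: True where the word was synthesized
    Returns the transcript with marker tokens wrapping each contiguous
    run of synthetic words, so a Whisper-style decoder can learn to
    transcribe and flag forged spans in a single output sequence.
    """
    out, in_fake = [], False
    for w, is_fake in zip(words, fake_flags):
        if is_fake and not in_fake:      # entering a synthetic span
            out.append(open_tok)
            in_fake = True
        elif not is_fake and in_fake:    # leaving a synthetic span
            out.append(close_tok)
            in_fake = False
        out.append(w)
    if in_fake:                          # utterance ended mid-span
        out.append(close_tok)
    return " ".join(out)


# Example: "cat" was replaced by a speech generative model.
target = tag_transcript(["the", "cat", "sat"], [False, True, False])
print(target)  # the <fake> cat </fake> sat
```

Fine-tuning would then proceed as ordinary sequence-to-sequence training on audio/target pairs (e.g., the partially vocoded utterances the paper proposes as cheap training data), with the marker tokens added to the tokenizer's vocabulary.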

Metadata

arXiv ID: 2602.22658
Provider: ARXIV
Primary Category: eess.AS
Published: 2026-02-26
Fetched: 2026-02-27 04:35
