How Well Do Current Speech Deepfake Detection Methods Generalize to the Real World?

Authors

Daixian Li, Jun Xue, Yanzhen Ren, Zhuolin Yi, Yihuan Huang, Guanxiang Feng, Yi Chai

Abstract

Recent advances in speech synthesis and voice conversion have greatly improved the naturalness and authenticity of generated audio. Meanwhile, evolving encoding, compression, and transmission mechanisms on social media platforms further obscure deepfake artifacts. These factors complicate reliable detection in real-world environments, underscoring the need for representative evaluation benchmarks. To this end, we introduce ML-ITW (Multilingual In-The-Wild), a multilingual dataset covering 14 languages, seven major platforms, and 180 public figures, totaling 28.39 hours of audio. We evaluate three detection paradigms: end-to-end neural models, self-supervised feature-based (SSL) methods, and audio large language models (Audio LLMs). Experimental results reveal significant performance degradation across diverse languages and real-world acoustic conditions, highlighting the limited generalization ability of existing detectors in practical scenarios. The ML-ITW dataset is publicly available.
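The abstract reports performance degradation across the three detection paradigms when evaluated on ML-ITW. In speech anti-spoofing benchmarks, detector performance is conventionally summarized by the equal error rate (EER) over per-utterance scores; the paper's exact evaluation protocol is not reproduced here, but a minimal sketch of such an EER computation (function name and score convention are illustrative assumptions) could look like:

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """Equal Error Rate: the operating point where the false-rejection
    rate (bona fide flagged as fake) equals the false-acceptance rate
    (spoof accepted as real). Higher score = more bona fide-like."""
    scores = np.concatenate([bonafide_scores, spoof_scores])
    labels = np.concatenate([np.ones_like(bonafide_scores),
                             np.zeros_like(spoof_scores)])
    order = np.argsort(scores)          # sweep thresholds in ascending score order
    labels = labels[order]
    # At each candidate threshold: fraction of bona fide rejected (FRR)
    # and fraction of spoof accepted (FAR).
    frr = np.cumsum(labels) / max(labels.sum(), 1)
    far = 1.0 - np.cumsum(1 - labels) / max((1 - labels).sum(), 1)
    idx = np.argmin(np.abs(frr - far))  # closest crossing of the two curves
    return (frr[idx] + far[idx]) / 2.0
```

A perfectly separating detector yields an EER of 0.0; a detector whose scores carry no information sits near 0.5, which is why cross-lingual, cross-platform degradation shows up directly in this metric.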

Metadata

arXiv ID: 2603.05852
Provider: ARXIV
Primary Category: cs.SD
Published: 2026-03-06
Fetched: 2026-03-09 06:05

Comment: Submitted to Interspeech 2026