March 19, 2026

DiscoPhon: Benchmarking the Unsupervised Discovery of Phoneme Inventories With Discrete Speech Units

Authors

Maxime Poli, Manel Khentout, Angelo Ortiz Tandazo, Ewan Dunbar, Emmanuel Chemla, Emmanuel Dupoux

Abstract

We introduce DiscoPhon, a multilingual benchmark for evaluating unsupervised phoneme discovery from discrete speech units. DiscoPhon covers 6 dev and 6 test languages, chosen to span a wide range of phonemic contrasts. Given only 10 hours of speech in a previously unseen language, systems must produce discrete units that are mapped to a predefined phoneme inventory, through either a many-to-one or a one-to-one assignment. The resulting sequences are evaluated for unit quality, recognition, and segmentation. We provide four pretrained multilingual HuBERT and SpidR baselines, and show that enough phonemic information is available in current models for the derived units to correlate well with phonemes, though with variation across languages.
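As a rough illustration of the many-to-one assignment step described in the abstract, the sketch below maps each discrete unit ID to the phoneme it co-occurs with most often across frame-aligned sequences. This is a generic co-occurrence-based assignment, not necessarily DiscoPhon's exact procedure; the function name and toy data are invented for illustration.

```python
from collections import Counter, defaultdict

def many_to_one_mapping(units, phonemes):
    """Assign each discrete unit ID to its most frequently co-occurring phoneme.

    `units` and `phonemes` are frame-aligned sequences of equal length.
    Several units may map to the same phoneme (many-to-one); a one-to-one
    variant would instead solve an optimal assignment over the same counts.
    """
    cooc = defaultdict(Counter)
    for u, p in zip(units, phonemes):
        cooc[u][p] += 1
    return {u: counts.most_common(1)[0][0] for u, counts in cooc.items()}

# Toy frame-aligned data: unit 1 overlaps "t" on 2 frames and "d" on 1,
# so it is assigned to "t".
units    = [0, 0, 1, 1, 1, 2, 2, 0]
phonemes = ["a", "a", "t", "t", "d", "s", "s", "a"]
mapping = many_to_one_mapping(units, phonemes)
# mapping == {0: "a", 1: "t", 2: "s"}
```

A one-to-one assignment would typically be obtained from the same co-occurrence matrix with an optimal bipartite matching (e.g. the Hungarian algorithm), forcing each unit to claim a distinct phoneme.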

Metadata

arXiv ID: 2603.18612
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-19
Fetched: 2026-03-20 06:02

Additional Metadata

Comment: 6 pages, 2 figures. Submitted to Interspeech 2026
Categories: cs.CL (primary), cs.SD, eess.AS
Updated: 2026-03-19T08:31:58Z
Abstract page: https://arxiv.org/abs/2603.18612v1
PDF: https://arxiv.org/pdf/2603.18612v1