Speech Codec Probing from Semantic and Phonetic Perspectives

Authors

Xuan Shi, Chang Zeng, Tiantian Feng, Shih-Heng Wang, Jianbo Ma, Shrikanth Narayanan

Abstract

Speech tokenizers are essential for connecting speech to large language models (LLMs) in multimodal systems. These tokenizers are expected to preserve both semantic and acoustic information for downstream understanding and generation. However, emerging evidence suggests that what is termed "semantic" in speech representations does not align with text-derived semantics, a mismatch that can degrade multimodal LLM performance. In this paper, we systematically analyze the information encoded by several widely used speech tokenizers, disentangling their semantic and phonetic content through word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as centered kernel alignment (CKA). Our results show that current tokenizers primarily capture phonetic rather than lexical-semantic structure, and we derive practical implications for the design of next-generation speech tokenization methods.
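
For readers unfamiliar with the two analysis tools the abstract names, below is a minimal Python sketch of (a) linear CKA in the form popularized by Kornblith et al. (2019) and (b) a word-level linear probe. The function names, array inputs, and the scikit-learn logistic-regression probe are illustrative assumptions; the paper's exact CKA variant and probe architecture are not specified on this page.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
        # Linear CKA between two representation matrices whose rows are
        # paired examples, e.g. speech-token embeddings (X) and text
        # embeddings (Y) of the same words. Values near 1 indicate
        # highly similar representational geometry.
        X = X - X.mean(axis=0, keepdims=True)  # center each dimension
        Y = Y - Y.mean(axis=0, keepdims=True)
        cross = np.linalg.norm(X.T @ Y, "fro") ** 2
        return float(cross / (np.linalg.norm(X.T @ X, "fro")
                              * np.linalg.norm(Y.T @ Y, "fro")))

    def probe_accuracy(feats: np.ndarray, labels: np.ndarray) -> float:
        # Word-level probing: train a linear classifier to predict a
        # label (e.g. word identity for semantics, phone class for
        # phonetics) from frozen tokenizer features; held-out accuracy
        # measures how linearly decodable that property is.
        X_tr, X_te, y_tr, y_te = train_test_split(
            feats, labels, test_size=0.2, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return clf.score(X_te, y_te)

Under a setup like this, a tokenizer that encodes phonetic rather than lexical-semantic structure would score well on phone-class probes while showing weak CKA alignment with text-embedding spaces, consistent with the abstract's conclusion.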

Metadata

arXiv ID: 2603.10371
Provider: ARXIV
Primary Category: eess.AS
Categories: eess.AS, cs.CL
Published: 2026-03-11
Abstract page: https://arxiv.org/abs/2603.10371v1
PDF: https://arxiv.org/pdf/2603.10371v1
Fetched: 2026-03-12 04:21
