AI · LLM · March 13, 2026

Towards unified brain-to-text decoding across speech production and perception

Authors

Zhizhang Yuan, Yang Yang, Gaorui Zhang, Baowen Cheng, Zehan Wu, Yuhao Xu, Xiaoying Liu, Liang Chen, Ying Mao, Meng Li

Abstract

Speech production and perception are the main ways humans communicate daily. Prior brain-to-text decoding studies have largely focused on a single modality and alphabetic languages. Here, we present a unified brain-to-sentence decoding framework for both speech production and perception in Mandarin Chinese. The framework exhibits strong generalization ability, enabling sentence-level decoding when trained only on single-character data and supporting characters and syllables unseen during training. In addition, it allows direct and controlled comparison of neural dynamics across modalities. Mandarin speech is decoded by first classifying syllable components in Hanyu Pinyin, namely initials and finals, from neural signals, followed by a post-trained large language model (LLM) that maps sequences of toneless Pinyin syllables to Chinese sentences. To enhance LLM decoding, we designed a three-stage post-training and two-stage inference framework based on a 7-billion-parameter LLM, achieving overall performance that exceeds larger commercial LLMs with hundreds of billions of parameters or more. In addition, several characteristics were observed in Mandarin speech production and perception: speech production involved neural responses across broader cortical regions than auditory perception; channels responsive to both modalities exhibited similar activity patterns, with speech perception showing a temporal delay relative to production; and decoding performance was broadly comparable across hemispheres. Our work not only establishes the feasibility of a unified decoding framework but also provides insights into the neural characteristics of Mandarin speech production and perception. These advances contribute to brain-to-text decoding in logosyllabic languages and pave the way toward neural language decoding systems supporting multiple modalities.
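The first decoding stage classifies the syllable components of Hanyu Pinyin, i.e. initials and finals, from neural signals. To make that decomposition concrete, here is a minimal, illustrative sketch (not the paper's implementation) of how a toneless Pinyin syllable splits into an initial and a final, using the standard set of 21 Mandarin initials; the example sentence is a hypothetical stand-in for a decoded syllable sequence:

```python
# Illustrative only: splitting toneless Hanyu Pinyin syllables into the
# initial/final components that the paper's first stage classifies.

# The 21 Mandarin initials, longest first so the digraphs "zh"/"ch"/"sh"
# are matched before "z"/"c"/"s".
INITIALS = sorted(
    ["b", "p", "m", "f", "d", "t", "n", "l", "g", "k", "h",
     "j", "q", "x", "zh", "ch", "sh", "r", "z", "c", "s"],
    key=len, reverse=True)

def split_syllable(syllable: str) -> tuple[str, str]:
    """Return (initial, final) for a toneless Pinyin syllable.

    Syllables with no listed initial (e.g. "ai", "er", and, in this
    simplification, y-/w-spelled syllables like "yi") get an empty initial.
    """
    for ini in INITIALS:
        if syllable.startswith(ini):
            return ini, syllable[len(ini):]
    return "", syllable  # zero-initial syllable

# A toneless-Pinyin sequence of the kind the second-stage LLM would map
# to a Chinese sentence:
print([split_syllable(s) for s in "ni hao shi jie".split()])
# → [('n', 'i'), ('h', 'ao'), ('sh', 'i'), ('j', 'ie')]
```

Because tones are discarded, many initial+final sequences are ambiguous between characters; resolving that ambiguity from sentence context is what the post-trained LLM stage is for.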

Metadata

arXiv ID: 2603.12628
Provider: ARXIV
Primary Category: q-bio.NC
Categories: q-bio.NC, cs.AI, eess.SP
Comment: 37 pages, 9 figures
Published: 2026-03-13
Fetched: 2026-03-16 06:01
