
VerChol -- Grammar-First Tokenization for Agglutinative Languages

Authors

Prabhu Raja

Abstract

Tokenization is the foundational step in all large language model (LLM) pipelines, yet the dominant approach, Byte Pair Encoding (BPE) and its variants, is inherently script-agnostic and optimized for English-like morphology. For agglutinative languages, a typological class encompassing the Dravidian family (Tamil, Kannada, Telugu, Malayalam), the Turkic languages (Turkish, Azerbaijani, Uzbek), the Uralic languages (Finnish, Hungarian, Estonian), Korean, Japanese, Swahili, Basque, and others, a single word may encode root, tense, aspect, person, number, gender agreement, case, and postpositions in one orthographic unit. Statistical tokenizers fragment these words into byte-pair chunks that sever morpheme boundaries and inflate token counts.
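The boundary-severing effect described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: a greedy longest-match subword tokenizer over a toy vocabulary, compared against a morpheme-aligned split of the Turkish word "evlerinizden" ("from your houses"). Both the toy vocabulary and the morpheme segmentation are illustrative assumptions.

```python
def greedy_subword(word, vocab):
    """Tokenize by repeatedly taking the longest vocabulary prefix,
    falling back to a single character when nothing matches."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

# Toy "statistical" vocabulary: frequent chunks learned without any
# notion of morphology (illustrative, not from a real BPE run).
vocab = {"evle", "rin", "izd", "en", "ev", "ler", "iniz", "den"}

word = "evlerinizden"
print(greedy_subword(word, vocab))   # chunks that cut across morpheme boundaries
# A grammar-first split keeps morphemes intact:
# ev (root) + ler (plural) + iniz (2pl possessive) + den (ablative)
print(["ev", "ler", "iniz", "den"])
```

Even when the token counts happen to match, the statistical chunks ("evle", "rin", ...) carry no stable grammatical meaning, whereas each morpheme-aligned token maps to one grammatical function.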

Metadata

arXiv ID: 2603.05883
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-06
Fetched: 2026-03-09 06:05

Comment: 13 pages. A Morphological Alternative to Statistical Subword Tokenization