
Semantic Token Clustering for Efficient Uncertainty Quantification in Large Language Models

Authors

Qi Cao, Andrew Gambardella, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks. However, the truthfulness of their outputs is not guaranteed, and their tendency toward overconfidence further limits reliability. Uncertainty quantification offers a promising way to identify potentially unreliable outputs, but most existing methods rely on repeated sampling or auxiliary models, introducing substantial computational overhead. To address these limitations, we propose Semantic Token Clustering (STC), an efficient uncertainty quantification method that leverages the semantic information inherently encoded in LLMs. Specifically, we group tokens into semantically consistent clusters using embedding clustering and prefix matching, and quantify uncertainty based on the probability mass aggregated over the corresponding semantic cluster. Our approach requires only a single generation and does not depend on auxiliary models. Experimental results show that STC achieves performance comparable to state-of-the-art baselines while substantially reducing computational overhead.
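The clustering step described in the abstract can be sketched as follows. This is our illustrative reconstruction, not the paper's implementation: the greedy one-pass clustering, the cosine `sim_threshold`, and the toy embeddings are all assumptions. The idea shown is the one the abstract states — group candidate next tokens that are semantically interchangeable (by embedding similarity or string-prefix matching, e.g. "Par" / "Paris"), then score confidence as the probability mass of the cluster containing the sampled token, from a single generation.

```python
import numpy as np

def cluster_tokens(tokens, embeddings, sim_threshold=0.9):
    """Greedy clustering sketch: a token joins an existing cluster if its
    embedding is cosine-similar to the cluster's first member, or if one
    token string is a prefix of the other (e.g. "Par" / "Paris")."""
    clusters = []  # each cluster is a list of token indices
    for i, tok in enumerate(tokens):
        placed = False
        for cl in clusters:
            j = cl[0]
            e_i, e_j = embeddings[i], embeddings[j]
            cos = float(e_i @ e_j / (np.linalg.norm(e_i) * np.linalg.norm(e_j)))
            t_i, t_j = tok.strip().lower(), tokens[j].strip().lower()
            prefix = t_i.startswith(t_j) or t_j.startswith(t_i)
            if cos >= sim_threshold or prefix:
                cl.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    return clusters

def cluster_confidence(tokens, probs, embeddings, sampled_idx, **kw):
    """Confidence = total probability mass of the semantic cluster
    that contains the sampled token (higher mass = lower uncertainty)."""
    for cl in cluster_tokens(tokens, embeddings, **kw):
        if sampled_idx in cl:
            return sum(probs[i] for i in cl)
    return probs[sampled_idx]

# Toy example: three surface variants of "Paris" share one cluster,
# so their probabilities aggregate (0.4 + 0.3 + 0.1 = 0.8).
tokens = ["Paris", " Paris", "Par", "London"]
embs = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
probs = [0.4, 0.3, 0.1, 0.2]
print(cluster_confidence(tokens, probs, embs, sampled_idx=0))  # 0.8
```

Aggregating over the cluster rather than scoring the single sampled token is what distinguishes this from plain token probability: "Paris" alone looks uncertain (0.4), but its semantic cluster is confident (0.8).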

Metadata

arXiv ID: 2603.20161
Provider: ARXIV
Primary Category: cs.CL
Categories: cs.CL, cs.AI, cs.LG
Comment: EACL 2026
Published: 2026-03-20
Fetched: 2026-03-23 16:54
