March 02, 2026

AdaPonderLM: Gated Pondering Language Models with Token-Wise Adaptive Depth

Authors

Shixiang Song, He Li, Zitong Wang, Boyi Zeng, Feichen Song, Yixuan Wang, Zhiqin John Xu, Ziwei He, Zhouhan Lin

Abstract

Test-time scaling via recurrent/iterative Transformers enables large language models to spend more computation at inference, but most pretrained recurrent LMs run a fixed number of iterations, wasting compute on easy tokens and lacking token-wise adaptivity. Following the core ideas of Adaptive Computation Time (ACT) and Early Exit (EE), we propose AdaPonderLM, a self-supervised recurrent language model that learns token-wise early exiting during pretraining without manually tuned per-token/per-layer pruning ratios. AdaPonderLM uses iteration-specific MLP gates with a monotonic halting mask to decide when each token stops recurring, and introduces a KV reuse mechanism that reuses cached key/value states for halted tokens, ensuring train-test consistency and practical acceleration. Across Pythia backbones from 70M to 410M (pretraining) and up to 2.8B (continued pretraining), AdaPonderLM reduces inference compute by about 10% while maintaining comparable language modeling perplexity and competitive downstream accuracy. Our analysis shows that the learned gates allocate more computation to high-NLL (hard) tokens, exhibiting adaptive-computation-time behavior in a fully self-supervised setting. Moreover, under iso-FLOPs, the learned halting policy consistently outperforms fixed pruning, showing that AdaPonderLM allocates compute to the right tokens rather than merely reducing average depth.
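The abstract's core mechanism — iteration-specific gates producing a monotonic ("sticky") halting mask, with halted tokens' states reused unchanged — can be sketched in a few lines. This is an illustrative toy in the spirit of ACT-style gating, not the paper's implementation; the function names, random weights, and the 0.5 halting threshold are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ponder(h, gate_weights, block_weight, max_iters):
    """Toy token-wise adaptive-depth loop.

    h: (seq, d) token hidden states
    gate_weights: (max_iters, d) one gate vector per iteration
    block_weight: (d, d) shared recurrent block
    Returns final states and the per-token iteration count.
    """
    seq, d = h.shape
    halted = np.zeros(seq, dtype=bool)
    depth = np.zeros(seq, dtype=int)  # iterations spent per token
    for t in range(max_iters):
        h_new = np.tanh(h @ block_weight)        # recurrent update
        # Halted tokens keep their previous state (analogous to
        # reusing cached K/V for halted tokens in a real model).
        h = np.where(halted[:, None], h, h_new)
        depth += ~halted
        # Iteration-specific gate: per-token halting probability.
        p_halt = sigmoid(h @ gate_weights[t])
        # Monotonic mask: once a token halts, it stays halted.
        halted |= p_halt > 0.5
        if halted.all():
            break
    return h, depth
```

Easy tokens halt after few iterations while hard tokens keep recurring, so average depth (and hence compute) falls below the fixed-iteration budget.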

Metadata

arXiv ID: 2603.01914
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-02
Fetched: 2026-03-03 04:34
