March 19, 2026

Adaptive Decoding via Test-Time Policy Learning for Self-Improving Generation

Authors

Asmita Bhardwaj, Yuya Jeremy Ong, Eelaaf Zahid, Basel Shbita

Abstract

Decoding strategies largely determine the quality of Large Language Model (LLM) outputs, yet widely used heuristics such as greedy or fixed temperature/top-p decoding are static and often task-agnostic, leading to suboptimal or inconsistent generation quality across domains that demand stylistic or structural flexibility. We introduce a reinforcement learning-based decoder sampler that treats decoding as sequential decision-making and learns a lightweight policy to adjust sampling parameters at test time while keeping LLM weights frozen. We evaluate on summarization datasets including BookSum, arXiv, and WikiHow, using Granite-3.3-2B and Qwen-2.5-0.5B. Our policy sampler consistently outperforms greedy and static baselines, achieving relative gains of up to +88% (BookSum, Granite) and +79% (WikiHow, Qwen). Reward ablations show that overlap-only objectives underperform composite rewards, while structured shaping terms (length, coverage, repetition, completeness) enable stable and sustained improvements. These findings highlight reinforcement learning as a practical mechanism for test-time adaptation in decoding, enabling domain-aware and user-controllable generation without retraining large models.
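The core idea in the abstract, a lightweight policy that picks sampling parameters per step while the LLM stays frozen, can be sketched in a few lines. The following is a hypothetical toy illustration, not the authors' implementation: a REINFORCE-style bandit policy chooses a temperature from a discrete set at each decoding step, a stand-in function plays the role of the frozen model's logits, and a composite shaping reward (length and repetition terms, echoing the paper's reward ablations) drives the update. All names, the vocabulary, and the reward weights here are assumptions for illustration only.

```python
import math
import random

random.seed(0)

TEMPS = [0.3, 0.7, 1.0, 1.3]   # candidate temperature "arms" (assumed)
prefs = [0.0] * len(TEMPS)     # learned policy preferences over arms
LR = 0.1                       # policy learning rate

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def frozen_model_logits(step):
    # Stand-in for a frozen LLM: fixed logits over a 5-token toy vocabulary.
    return [2.0, 1.0, 0.5, 0.2, 0.1]

def sample_token(logits, temp):
    # Temperature-scaled categorical sampling, as in standard decoding.
    probs = softmax([l / temp for l in logits])
    r, acc = random.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r < acc:
            return tok
    return len(probs) - 1

def reward(tokens):
    # Illustrative composite shaping: prefer length near 8, penalize repeats.
    length_term = -abs(len(tokens) - 8) / 8.0
    rep_term = -(len(tokens) - len(set(tokens))) / max(len(tokens), 1)
    return length_term + rep_term

def run_episode():
    tokens, chosen = [], []
    for step in range(8):
        arm = random.choices(range(len(TEMPS)), softmax(prefs))[0]
        chosen.append(arm)
        tokens.append(sample_token(frozen_model_logits(step), TEMPS[arm]))
    R = reward(tokens)
    probs = softmax(prefs)
    for arm in chosen:  # REINFORCE-style update on the arm preferences
        for i in range(len(prefs)):
            grad = (1.0 if i == arm else 0.0) - probs[i]
            prefs[i] += LR * R * grad
    return R

rewards = [run_episode() for _ in range(200)]
print("early avg:", round(sum(rewards[:20]) / 20, 3),
      "late avg:", round(sum(rewards[-20:]) / 20, 3))
```

In the paper's setting the frozen-logit stub would be a real LLM forward pass, the state would include context features rather than just the step index, and the reward would combine overlap with the shaping terms the abstract lists; the sketch only shows the decision-making loop (choose parameter, sample, score, update policy) that "decoding as sequential decision-making" refers to.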

Metadata

arXiv ID: 2603.18428
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-19
Fetched: 2026-03-20 06:02
