
Partial Policy Gradients for RL in LLMs

Authors

Puneet Mathur, Branislav Kveton, Subhojyoti Mukherjee, Viet Dac Lai

Abstract

Reinforcement learning is a framework for learning to act sequentially in an unknown environment. We propose a natural approach for modeling policy structure in policy gradients. The key idea is to optimize for a subset of future rewards: smaller subsets represent simpler policies, which can be learned more reliably because their empirical gradient estimates are more accurate. Our approach allows for modeling and comparison of different policy classes, including full planning, greedy, K-step lookahead, and segment policies. We evaluate the policies empirically on multiple persona-alignment conversational problems. Different policies excel in different problems, reflecting their distinct characteristics and highlighting the value of the studied policy classes.
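The key idea in the abstract — weighting each step's score by a return computed from only a subset of future rewards — can be sketched as a REINFORCE-style estimate with truncated returns-to-go. This is a minimal illustration under stated assumptions (scalar per-step rewards, one trajectory, scalar stand-ins for the per-step score gradients); the function names and the exact estimator are hypothetical, not taken from the paper.

```python
def truncated_returns(rewards, k, gamma=1.0):
    """Return-to-go at each step, restricted to the next k rewards.

    k = len(rewards) recovers the full-planning return,
    k = 1 the greedy objective, intermediate k a K-step lookahead.
    """
    T = len(rewards)
    return [
        sum(gamma ** (j - t) * rewards[j] for j in range(t, min(t + k, T)))
        for t in range(T)
    ]

def partial_policy_gradient(score_grads, rewards, k, gamma=1.0):
    """REINFORCE-style estimate: each step's score gradient is weighted
    by its k-step truncated return instead of the full return.

    `score_grads` stands in for grad log pi(a_t | s_t); here scalars
    for illustration, in practice per-parameter gradient vectors.
    """
    weights = truncated_returns(rewards, k, gamma)
    return sum(w * g for w, g in zip(weights, score_grads))
```

Smaller k averages fewer reward terms into each weight, which is consistent with the abstract's point that simpler policies yield lower-variance empirical gradient estimates.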

Metadata

arXiv ID: 2603.06138
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-06
Fetched: 2026-03-09 06:05
