
IsoCompute Playbook: Optimally Scaling Sampling Compute for LLM RL

Authors

Zhoujun Cheng, Yutao Xie, Yuxiao Qu, Amrith Setlur, Shibo Hao, Varad Pimpalkhute, Tongtong Liang, Feng Yao, Zhengzhong Liu, Eric Xing, Virginia Smith, Ruslan Salakhutdinov, Zhiting Hu, Taylor Killian, Aviral Kumar

Abstract

While scaling laws guide compute allocation for LLM pre-training, analogous prescriptions for reinforcement learning (RL) post-training of large language models (LLMs) remain poorly understood. We study the compute-optimal allocation of sampling compute for on-policy RL methods in LLMs, framing scaling as a compute-constrained optimization over three resources: parallel rollouts per problem, number of problems per batch, and number of update steps. We find that the compute-optimal number of parallel rollouts per problem increases predictably with compute budget and then saturates. This trend holds across both easy and hard problems, though driven by different mechanisms: solution sharpening on easy problems and coverage expansion on hard problems. We further show that increasing the number of parallel rollouts mitigates interference across problems, while the number of problems per batch primarily affects training stability and can be chosen within a broad range. Validated across base models and data distributions, our results recast RL scaling laws as prescriptive allocation rules and provide practical guidance for compute-efficient LLM RL post-training.
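
The framing above can be written compactly as a constrained optimization. A minimal sketch in LaTeX, with the caveat that the symbols G, B, T, L, C, and J below are our own notation for illustration, not taken from the paper:

\max_{G,\, B,\, T} \; J(G, B, T) \quad \text{subject to} \quad G \cdot B \cdot T \cdot \bar{L} \;\le\; C

where G is the number of parallel rollouts per problem, B the number of problems per batch, T the number of update steps, \bar{L} the average rollout length in tokens, and C the total sampling-compute budget, with J denoting post-training performance. In these terms, the abstract's headline finding is that the compute-optimal G grows predictably with C before saturating, while B can be chosen within a broad range.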

Metadata

arXiv ID: 2603.12151
Provider: ARXIV
Primary Category: cs.LG
Secondary Category: cs.AI
Comment: 29 pages, 27 figures. Under review
Published: 2026-03-12
Fetched: 2026-03-14 05:03
Links: https://arxiv.org/abs/2603.12151v1 (abstract), https://arxiv.org/pdf/2603.12151v1 (PDF)
