IsoCompute Playbook: Optimally Scaling Sampling Compute for LLM RL
Authors
Zhoujun Cheng, Yutao Xie, Yuxiao Qu, Amrith Setlur, Shibo Hao, Varad Pimpalkhute, Tongtong Liang, Feng Yao, Zhengzhong Liu, Eric Xing, Virginia Smith, Ruslan Salakhutdinov, Zhiting Hu, Taylor Killian, Aviral Kumar
Abstract
While scaling laws guide compute allocation for LLM pre-training, analogous prescriptions for reinforcement learning (RL) post-training of large language models (LLMs) remain poorly understood. We study the compute-optimal allocation of sampling compute for on-policy RL methods in LLMs, framing scaling as a compute-constrained optimization over three resources: parallel rollouts per problem, number of problems per batch, and number of update steps. We find that the compute-optimal number of parallel rollouts per problem increases predictably with compute budget and then saturates. This trend holds across both easy and hard problems, though driven by different mechanisms: solution sharpening on easy problems and coverage expansion on hard problems. We further show that increasing the number of parallel rollouts mitigates interference across problems, while the number of problems per batch primarily affects training stability and can be chosen within a broad range. Validated across base models and data distributions, our results recast RL scaling laws as prescriptive allocation rules and provide practical guidance for compute-efficient LLM RL post-training.
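The abstract frames scaling as a compute-constrained optimization over three resources: parallel rollouts per problem, problems per batch, and update steps. A minimal sketch of that framing is below, under the simplifying assumption that every rollout costs one unit of sampling compute (so total cost is rollouts × batch size × steps); the function and parameter names are illustrative and not taken from the paper.

```python
from itertools import product

def iso_compute_allocations(budget, rollout_options, batch_options):
    """Enumerate (rollouts_per_problem, problems_per_batch, update_steps)
    triples whose total sampling cost n * B * T exactly matches the budget.

    Assumes a unit-cost-per-rollout model, a deliberate simplification:
    in practice rollout length (tokens) also factors into sampling cost.
    """
    allocations = []
    for n, b in product(rollout_options, batch_options):
        per_step = n * b            # rollouts sampled per update step
        if budget % per_step == 0:  # keep only exact iso-compute splits
            t = budget // per_step  # update steps affordable at this split
            allocations.append((n, b, t))
    return allocations

# Example: candidate ways to spend a budget of 4096 total rollouts.
for n, b, t in iso_compute_allocations(4096, [4, 8, 16], [16, 32]):
    print(f"rollouts/problem={n:2d}  problems/batch={b:2d}  steps={t}")
```

Comparing final performance across such iso-compute triples is what lets an allocation rule (e.g. "rollouts per problem grows with budget, then saturates") be read off empirically.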
Metadata
arXiv ID: 2603.12151v1 (https://arxiv.org/abs/2603.12151v1)
PDF: https://arxiv.org/pdf/2603.12151v1
Published: 2026-03-12
Categories: cs.LG (primary), cs.AI
Comments: 29 pages, 27 figures. Under review