
Towards Better RL Training Data Utilization via Second-Order Rollout

Authors

Zhe Yang, Yudong Wang, Rang Li, Zhifang Sui

Abstract

Reinforcement Learning (RL) has empowered Large Language Models (LLMs) with strong reasoning capabilities, but vanilla RL focuses mainly on improving generation capability by training with only first-order rollout (generating multiple responses for a question). We argue that this approach fails to fully exploit the potential of training data because it neglects critique capability training. To tackle this problem, we introduce the concept of second-order rollout (generating multiple critiques for a response) and propose a unified framework for jointly training generation and critique capabilities. Extensive experiments across various models and datasets demonstrate that our approach utilizes training data more effectively than vanilla RL and achieves better performance with the same training data. Additionally, we uncover several insightful findings regarding second-order rollout and critique training, such as the importance of label balance in critique training and the noise problem of outcome-based rewards, which can be mitigated through sampling techniques. Our work offers a preliminary exploration of dynamic data augmentation and joint generation-critique training in RL, providing meaningful inspiration for the further advancement of RL training.
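As a rough illustration of the two rollout orders described in the abstract, below is a minimal Python sketch. The model interface (model.generate), the critique prompt format, the verifier, and the sampling counts are all hypothetical stand-ins for exposition; they are not the paper's actual implementation or reward design.

# Minimal sketch of first- vs. second-order rollout, assuming a
# hypothetical model.generate(prompt) sampling interface and a
# hypothetical verifier(question, response) -> bool correctness check.

def first_order_rollout(model, question, n_responses=8):
    # Vanilla RL rollout: sample multiple candidate responses per question.
    return [model.generate(question) for _ in range(n_responses)]

def second_order_rollout(model, question, response, n_critiques=4):
    # Second-order rollout: sample multiple critiques of a single response.
    prompt = f"Question: {question}\nResponse: {response}\nCritique:"
    return [model.generate(prompt) for _ in range(n_critiques)]

def joint_rollout(model, verifier, question, n_responses=8, n_critiques=4):
    # Jointly collect generation and critique trajectories from one question.
    responses = first_order_rollout(model, question, n_responses)
    # Reflecting the abstract's label-balance finding: critique an equal
    # number of correct and incorrect responses, so critique training does
    # not collapse toward a majority label. (Balancing scheme assumed here.)
    correct = [r for r in responses if verifier(question, r)]
    incorrect = [r for r in responses if not verifier(question, r)]
    k = min(len(correct), len(incorrect))
    critiques = {r: second_order_rollout(model, question, r, n_critiques)
                 for r in correct[:k] + incorrect[:k]}
    return responses, critiques

The balancing step is only a sketch of the label-balance idea the abstract mentions; how the paper actually selects responses for critique, and how it mitigates outcome-reward noise through sampling, is not recoverable from the abstract alone.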

Metadata

arXiv ID: 2602.22765
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-02-26
Fetched: 2026-02-27 04:35
