AI LLM March 20, 2026

Experience is the Best Teacher: Motivating Effective Exploration in Reinforcement Learning for LLMs

Authors

Wenjian Zhang, Kongcheng Zhang, Jiaxin Qi, Baisheng Lai, Jianqiang Huang

Abstract

Reinforcement Learning (RL) with rubric-based rewards has recently shown remarkable progress in enhancing the general reasoning capabilities of Large Language Models (LLMs), yet it still suffers from ineffective exploration confined to the current policy distribution. In fact, RL optimization can be viewed as steering the policy toward an ideal distribution that maximizes the rewards, so effective exploration should align its effort with that desired target. Leveraging this insight, we propose HeRL, a Hindsight experience guided Reinforcement Learning framework that bootstraps effective exploration by explicitly telling LLMs the desired behaviors specified in the rewards. Concretely, HeRL treats failed trajectories, together with their unmet rubrics, as hindsight experience, which serves as in-context guidance for the policy to explore desired responses beyond its current distribution. Additionally, we introduce a bonus reward to incentivize responses with greater potential for improvement under such guidance. HeRL facilitates effective learning from desired high-quality samples without repeated trial-and-error from scratch, theoretically yielding a more accurate estimate of the expected gradient. Extensive experiments across various benchmarks demonstrate that HeRL achieves superior performance gains over baselines, and it can further benefit from experience-guided self-improvement at test time. Our code is available at https://github.com/sikelifei/HeRL.
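The mechanism the abstract describes can be sketched in a few lines. This is a hypothetical illustration based only on the abstract, not the paper's actual implementation: `build_hindsight_prompt` folds a failed trajectory and its unmet rubrics back into the prompt as in-context guidance, and `bonus_reward` is an assumed form of the bonus that favors responses which improve more under that guidance.

```python
# Illustrative sketch of the HeRL idea from the abstract (all names and
# the exact bonus form are assumptions, not taken from the paper).

def build_hindsight_prompt(question, failed_response, unmet_rubrics):
    """Turn a failed trajectory plus its unmet rubrics into in-context guidance."""
    guidance = "\n".join(f"- {r}" for r in unmet_rubrics)
    return (
        f"Question: {question}\n"
        f"Previous attempt (failed):\n{failed_response}\n"
        f"The attempt did not satisfy these rubrics:\n{guidance}\n"
        "Write an improved answer that satisfies all rubrics."
    )

def bonus_reward(base_reward, guided_reward, scale=0.5):
    """Add a bonus proportional to how much the response improves under guidance."""
    return base_reward + scale * max(0.0, guided_reward - base_reward)
```

In a training loop, responses whose guided re-attempts score much higher than their unguided attempts would receive a larger total reward, incentivizing exploration toward the behaviors the rubrics specify.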

Metadata

arXiv ID: 2603.20046
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-20
Fetched: 2026-03-23 16:54
