
Towards Effective Experiential Learning: Dual Guidance for Utilization and Internalization

Authors

Fei Bai, Zhipeng Chen, Chuan Hao, Ming Yang, Ran Tao, Bryan Dai, Wayne Xin Zhao, Jian Yang, Hongteng Xu

Abstract

Recently, reinforcement learning (RL) has become an important approach for improving the capabilities of large language models (LLMs). In particular, reinforcement learning from verifiable rewards (RLVR) has emerged as a promising paradigm for reasoning tasks. However, existing RL-based training remains only a rough approximation of human learning. Human learners leverage both external and internal experience to guide exploration and gradually internalize useful trajectories into stable knowledge. Motivated by this gap, we ask: how can LLMs better utilize and internalize experience during RLVR training? To answer this question, we propose Dual Guidance Optimization (DGO), a unified framework that leverages external and internal experience to improve training effectiveness. Specifically, DGO first constructs an experience bank from previously explored trajectories. The policy then performs exploration under the joint guidance of the experience bank and the model's internal knowledge. The resulting trajectories are further used to refine the experience bank and optimize the model parameters, forming a closed loop of experience utilization and internalization. Experiments show that DGO consistently outperforms baseline methods, suggesting that better utilization and internalization of experience lead to more effective reasoning.
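The abstract describes DGO's closed loop only at a high level. As a rough illustration, the minimal Python sketch below shows one way such a loop could be structured; every name in it (ExperienceBank, Trajectory, rollout, update_policy, and the random retrieval and reward stand-ins) is a hypothetical placeholder inferred from the abstract, not the authors' implementation.

import random
from dataclasses import dataclass, field


@dataclass
class Trajectory:
    prompt: str
    response: str
    reward: float  # verifiable reward, e.g. 1.0 if the answer checks out


@dataclass
class ExperienceBank:
    """External experience: previously explored high-reward trajectories."""
    capacity: int = 100
    items: list = field(default_factory=list)

    def add(self, traj: Trajectory) -> None:
        # Keep only successful trajectories; evict the weakest when full.
        if traj.reward > 0:
            self.items.append(traj)
            self.items.sort(key=lambda t: t.reward, reverse=True)
            del self.items[self.capacity:]

    def retrieve(self, prompt: str, k: int = 2) -> list:
        # Placeholder retrieval: a random sample. A real system would
        # likely use similarity search over prompts.
        return random.sample(self.items, min(k, len(self.items)))


def rollout(prompt: str, guidance: list) -> Trajectory:
    # Stand-in for an LLM rollout: exploration jointly guided by retrieved
    # experience (external) and the current policy (internal knowledge).
    hints = "; ".join(t.response for t in guidance)
    response = f"answer({prompt}) using hints [{hints}]"
    reward = float(random.random() > 0.5)  # stand-in for a verifiable reward
    return Trajectory(prompt, response, reward)


def update_policy(trajectories: list) -> None:
    # Stand-in for the RLVR gradient update: internalizes useful
    # trajectories into the model parameters.
    pass


def dgo_step(bank: ExperienceBank, prompts: list) -> None:
    new_trajs = []
    for prompt in prompts:
        guidance = bank.retrieve(prompt)  # utilize external experience
        traj = rollout(prompt, guidance)  # explore under dual guidance
        bank.add(traj)                    # refine the experience bank
        new_trajs.append(traj)
    update_policy(new_trajs)              # internalize into parameters


if __name__ == "__main__":
    bank = ExperienceBank()
    for _ in range(3):  # closed loop: each iteration reuses prior experience
        dgo_step(bank, ["2 + 2 = ?", "factor 91"])
    print(f"experience bank size: {len(bank.items)}")

The point the sketch tries to capture is the dual role of each explored trajectory: it guides subsequent exploration through the experience bank (utilization) and feeds the policy update (internalization), so the two mechanisms reinforce each other across iterations.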

Metadata

arXiv ID: 2603.24093
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-25
Fetched: 2026-03-26 06:02

