Paper
Can RL Improve Generalization of LLM Agents? An Empirical Study
Authors
Zhiheng Xi, Xin Guo, Jiaqi Liu, Jiazheng Zhang, Yutao Fan, Zhihao Zhang, Shichun Liu, Mingxu Chai, Xiaowei Shi, Yitao Zhai, Xunliang Cai, Tao Gui, Qi Zhang, Xuanjing Huang
Abstract
Reinforcement fine-tuning (RFT) has shown promise for training LLM agents to perform multi-turn decision-making based on environment feedback. However, most existing evaluations remain largely in-domain: training and testing are conducted in the same environment or even on the same tasks. In real-world deployment, agents may operate in unseen environments with different background knowledge, observation spaces, and action interfaces. To characterize the generalization profile of RFT under such shifts, we conduct a systematic study along three axes: (1) within-environment generalization across task difficulty, (2) cross-environment transfer to unseen environments, and (3) sequential multi-environment training to quantify transfer and forgetting. Our results show that RFT generalizes well across task difficulty within an environment, but exhibits weaker transfer to unseen environments, which correlates with shifts in both semantic priors and observation/action interfaces. In contrast, sequential training yields promising downstream gains with minimal upstream forgetting, and mixture training across environments improves the overall balance. We further provide detailed analyses and deeper insights, and hope our work helps the community develop and deploy generalizable LLM agents.
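The abstract's third axis, sequential multi-environment training, quantifies transfer and forgetting. As a minimal sketch of how such a protocol is commonly measured (not the authors' code; `train_rft` and `evaluate` are hypothetical stand-ins for an RFT update and a per-environment success-rate evaluation):

```python
from typing import Callable, Dict, List

def sequential_transfer_study(
    agent,
    envs: List[str],
    train_rft: Callable,  # assumed: fine-tunes `agent` on one environment
    evaluate: Callable,   # assumed: returns agent's success rate on one env
) -> Dict[str, float]:
    """Train on environments in order; after each stage, re-evaluate all
    environments to track downstream transfer and upstream forgetting."""
    history: Dict[str, List[float]] = {e: [] for e in envs}
    for env in envs:
        agent = train_rft(agent, env)
        for e in envs:
            history[e].append(evaluate(agent, e))

    metrics: Dict[str, float] = {}
    for i, e in enumerate(envs):
        scores = history[e]
        # Upstream forgetting: drop on env `e` from the score measured
        # right after training on it to the score after the final stage.
        if i < len(envs) - 1:
            metrics[f"forgetting/{e}"] = scores[i] - scores[-1]
        # Pre-exposure score: performance on `e` after training only on
        # earlier environments, a proxy for forward (downstream) transfer.
        if i > 0:
            metrics[f"pre_exposure/{e}"] = scores[i - 1]
    return metrics
```

Under this sketch, near-zero `forgetting/*` values with rising `pre_exposure/*` scores would correspond to the paper's reported pattern of promising downstream gains with minimal upstream forgetting.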
Metadata
arXiv:2603.12011v1 [cs.AI] · Published 2026-03-12 · Preprint, under review
Abstract page: https://arxiv.org/abs/2603.12011v1 · PDF: https://arxiv.org/pdf/2603.12011v1