AI · LLM · February 25, 2026

GradAlign: Gradient-Aligned Data Selection for LLM Reinforcement Learning

Authors

Ningyuan Yang, Weihua Du, Weiwei Sun, Sean Welleck, Yiming Yang

Abstract

Reinforcement learning (RL) has become a central post-training paradigm for large language models (LLMs), but its performance is highly sensitive to the quality of training problems. This sensitivity stems from the non-stationarity of RL: rollouts are generated by an evolving policy, and learning is shaped by exploration and reward feedback, unlike supervised fine-tuning (SFT) with fixed trajectories. As a result, prior work often relies on manual curation or simple heuristic filters (e.g., accuracy), which can admit incorrect or low-utility problems. We propose GradAlign, a gradient-aligned data selection method for LLM reinforcement learning that uses a small, trusted validation set to prioritize training problems whose policy gradients align with validation gradients, yielding an adaptive curriculum. We evaluate GradAlign across three challenging data regimes (unreliable reward signals, distribution imbalance, and low-utility training corpora) and show that GradAlign consistently outperforms existing baselines. These results underscore the importance of directional gradient signals in navigating non-stationary policy optimization, yielding more stable training and improved final performance. We release our implementation at https://github.com/StigLidu/GradAlign.
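The abstract does not spell out the exact alignment rule, so the following is only a minimal sketch of the idea: it assumes each candidate training problem contributes a (possibly projected) policy-gradient estimate and that alignment with the gradient from the trusted validation set is scored by cosine similarity. The function names, shapes, and the 512-dimensional projection are illustrative assumptions, not details taken from the paper.

import numpy as np

def gradient_alignment_scores(train_grads, val_grad, eps=1e-8):
    # Cosine similarity between each candidate problem's gradient estimate
    # and the gradient computed on the small trusted validation set.
    val_unit = val_grad / (np.linalg.norm(val_grad) + eps)
    norms = np.linalg.norm(train_grads, axis=1, keepdims=True) + eps
    return (train_grads / norms) @ val_unit

def select_problems(train_grads, val_grad, k):
    # Keep the k problems whose gradients point most in the validation
    # direction; rescoring each step yields an adaptive curriculum, since
    # the policy (and hence the gradients) changes as training proceeds.
    scores = gradient_alignment_scores(train_grads, val_grad)
    return np.argsort(-scores)[:k]

# Toy usage (all shapes hypothetical): 100 candidate problems, gradients
# flattened or random-projected to 512 dimensions to keep scoring cheap.
rng = np.random.default_rng(0)
train_grads = rng.normal(size=(100, 512))  # per-problem policy-gradient estimates
val_grad = rng.normal(size=512)            # gradient on the trusted validation set
print(select_problems(train_grads, val_grad, k=16))

In an actual RL loop, train_grads would be re-estimated from fresh rollouts under the current policy at each selection step rather than reused, which is what makes the resulting curriculum adaptive to the non-stationary training dynamics described in the abstract.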

Metadata

arXiv ID: 2602.21492
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, cs.AI, cs.CL
Comment: 14 pages. Preliminary work
Published: 2026-02-25
Fetched: 2026-02-26 05:00
PDF: https://arxiv.org/pdf/2602.21492v1
