AI · LLM · March 25, 2026

From AI Assistant to AI Scientist: Autonomous Discovery of LLM-RL Algorithms with LLM Agents

Authors

Sirui Xia, Yikai Zhang, Aili Chen, Siye Wu, Siyu Yuan, Yanghua Xiao

Abstract

Discovering improved policy optimization algorithms for language models remains a costly manual process that requires repeated mechanism-level modification and validation. Unlike simple combinatorial code search, this problem requires searching over algorithmic mechanisms tightly coupled with training dynamics while reusing empirical evidence across iterations. We propose POISE, a closed-loop framework for the automated discovery of policy optimization algorithms for language models. POISE maintains a structured, genealogically organized archive that links proposals, executable implementations, standardized evaluations, and natural-language reflections, supporting evidence-driven iteration. In mathematical reasoning experiments starting from GRPO, POISE evaluates 64 candidate algorithms and discovers improved mechanisms, including analytic-variance scaling and validity masking. The best variant improves the weighted Overall score from 47.8 to 52.5 (+4.6) and raises AIME25 pass@32 from 26.7% to 43.3%, demonstrating the feasibility of automated policy optimization discovery while yielding interpretable design principles.
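
The abstract names two concrete artifacts: a genealogically linked archive and a pair of discovered advantage-shaping mechanisms. Neither is specified on this page, so the sketches below are illustrations only. First, a minimal Python record for the archive; every field name is an assumption about what "proposals, executable implementations, standardized evaluations, and natural-language reflections" might map to.

from dataclasses import dataclass

@dataclass
class ArchiveNode:
    # Hypothetical record in a POISE-style genealogically linked archive.
    node_id: str
    parent_id: str | None         # link to the candidate this one was derived from
    proposal: str                 # natural-language description of the mechanism change
    implementation: str           # source of the executable loss modification
    evaluation: dict[str, float]  # standardized scores, e.g. {"AIME25_pass@32": 0.433}
    reflection: str               # natural-language analysis of the result

"Analytic-variance scaling" and "validity masking" are likewise named but not defined here. One plausible reading, assuming binary (0/1) verifier rewards over a GRPO-style group of rollouts: scale advantages by the closed-form Bernoulli standard deviation sqrt(p(1-p)) rather than a sample estimate, and mask out groups whose rewards are all identical, since they carry no within-group learning signal. This is an interpretation, not the paper's published method.

import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    # Baseline GRPO: standardize rewards within a group of rollouts.
    # Many implementations use the Bessel-corrected sample std (ddof=1).
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std(ddof=1) + eps)

def scaled_masked_advantages(rewards, eps=1e-6):
    # Hypothetical variant combining the two mechanisms named in the
    # abstract, under the binary-reward assumption. Returns (adv, mask).
    r = np.asarray(rewards, dtype=np.float64)
    p = r.mean()  # group success rate
    if p <= 0.0 or p >= 1.0:
        # "Validity masking" (assumed meaning): an all-correct or all-wrong
        # group has no within-group signal; mask it out instead of dividing
        # by a near-zero standard deviation.
        return np.zeros_like(r), np.zeros_like(r)
    # "Analytic-variance scaling" (assumed meaning): use the closed-form
    # Bernoulli std sqrt(p * (1 - p)) instead of the sample estimate.
    return (r - p) / (np.sqrt(p * (1.0 - p)) + eps), np.ones_like(r)

if __name__ == "__main__":
    group = [1, 0, 0, 1, 1, 0, 0, 0]  # correctness of 8 rollouts of one prompt
    adv, mask = scaled_masked_advantages(group)
    print(adv, mask)

For 0/1 rewards the population standard deviation already equals sqrt(p(1-p)), so the analytic form differs from the ddof=1 sample estimate only by a fixed group-size factor; its main practical effect in this sketch is making the degenerate zero-variance case explicit rather than leaving it to the eps term.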

Metadata

arXiv ID: 2603.23951
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-25
Fetched: 2026-03-26 06:02
