March 13, 2026

EvolveCoder: Evolving Test Cases via Adversarial Verification for Code Reinforcement Learning

Authors

Chi Ruan, Dongfu Jiang, Huaye Zeng, Ping Nie, Wenhu Chen

Abstract

Reinforcement learning with verifiable rewards (RLVR) is a promising approach for improving code generation in large language models, but its effectiveness is limited by weak and static verification signals in existing coding RL datasets. In this paper, we propose a solution-conditioned and adversarial verification framework that iteratively refines test cases based on the execution behaviors of candidate solutions, with the goal of increasing difficulty, improving discriminative power, and reducing redundancy. Based on this framework, we introduce EvolveCoder-22k, a large-scale coding reinforcement learning dataset constructed through multiple rounds of adversarial test case evolution. Empirical analysis shows that iterative refinement substantially strengthens verification, with pass@1 decreasing from 43.80 to 31.22. Reinforcement learning on EvolveCoder-22k yields stable optimization and consistent performance gains, improving Qwen3-4B by an average of 4.2 points across four downstream benchmarks and outperforming strong 4B-scale baselines. Our results highlight the importance of adversarial, solution-conditioned verification for effective and scalable reinforcement learning in code generation.
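
Illustrative sketch

The abstract describes the evolution loop only at a high level: execute candidate solutions against the current test suite, then refine that suite to be harder, more discriminative, and less redundant. Below is a minimal Python sketch of one such round, assuming solutions are callables and tests are (input, expected_output) pairs. The function names (evolve_round, propose_tests), the signature-based redundancy filter, and the stubbed test generator are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of one round of solution-conditioned, adversarial
    # test-case evolution. The paper's concrete procedure is not given in the
    # abstract; this is an illustrative reconstruction, not the authors' code.

    from typing import Any, Callable

    Solution = Callable[[Any], Any]
    TestCase = tuple[Any, Any]  # (input, expected_output)


    def run(solution: Solution, test: TestCase) -> bool:
        """Execute one candidate solution on one test; a crash counts as a fail."""
        inp, expected = test
        try:
            return solution(inp) == expected
        except Exception:
            return False


    def evolve_round(solutions: list[Solution],
                     tests: list[TestCase],
                     propose_tests: Callable[[list[Solution]], list[TestCase]]
                     ) -> list[TestCase]:
        """One evolution round: keep discriminative tests, drop redundant ones,
        and add new tests targeting solutions that still pass everything."""
        # Pass/fail matrix: results[i][j] = does solution i pass test j?
        results = [[run(s, t) for t in tests] for s in solutions]

        kept: list[TestCase] = []
        seen_signatures: set[tuple[bool, ...]] = set()
        for j, test in enumerate(tests):
            signature = tuple(results[i][j] for i in range(len(solutions)))
            if not any(signature):            # no solution passes: likely invalid
                continue
            if signature in seen_signatures:  # identical pass/fail pattern: redundant
                continue
            seen_signatures.add(signature)
            kept.append(test)

        # Adversarial step: solutions that pass every test expose a blind spot
        # in the suite, so ask the generator (e.g. an LLM, conditioned on those
        # solutions) for harder tests that try to break them.
        unbroken = [s for s, row in zip(solutions, results) if all(row)]
        if unbroken:
            kept.extend(propose_tests(unbroken))
        return kept


    if __name__ == "__main__":
        correct = abs                       # reference solution for |x|
        buggy = lambda x: x                 # wrong on negative inputs
        tests = [(3, 3), (5, 5), (-2, 2)]   # (5, 5) duplicates (3, 3)'s signature
        harder = lambda sols: [(-10, 10)]   # stand-in for an LLM test generator
        print(evolve_round([correct, buggy], tests, harder))
        # -> [(3, 3), (-2, 2), (-10, 10)]: the duplicate is dropped and a new
        #    adversarial test targets `correct`, which passed everything.

Dropping tests with duplicate pass/fail signatures while adding adversarial tests aimed at still-unbroken solutions is the pressure that drives pass@1 down over rounds (43.80 to 31.22 in the paper's analysis): the evolved suite rejects more incorrect candidates, which in turn yields a cleaner verification signal for reinforcement learning.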

Metadata

arXiv ID: 2603.12698
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-13
Fetched: 2026-03-16 06:01
