
Evaluating and Improving Automated Repository-Level Rust Issue Resolution with LLM-based Agents

Authors

Jiahong Xiang, Wenxiao He, Xihua Wang, Hongliang Tian, Yuqun Zhang

Abstract

The Rust programming language presents a steep learning curve and significant coding challenges, making the automation of issue resolution essential for its broader adoption. Recently, LLM-powered code agents have shown remarkable success in resolving complex software engineering tasks, yet their application to Rust has been limited by the absence of a large-scale, repository-level benchmark. To bridge this gap, we introduce Rust-SWE-bench, a benchmark comprising 500 real-world, repository-level software engineering tasks from 34 diverse and popular Rust repositories. We then perform a comprehensive study on Rust-SWE-bench with four representative agents and four state-of-the-art LLMs to establish a foundational understanding of their capabilities and limitations in the Rust ecosystem. Our extensive study reveals that while ReAct-style agents are promising, resolving up to 21.2% of issues, they are limited by two primary challenges: comprehending repository-wide code structure and complying with Rust's strict type and trait semantics. We also find that issue reproduction is critical for task resolution. Inspired by these findings, we propose RUSTFORGER, a novel agentic approach that integrates an automated test environment setup with a Rust metaprogramming-driven dynamic tracing strategy to facilitate reliable issue reproduction and dynamic analysis. The evaluation shows that RUSTFORGER using Claude-Sonnet-3.7 significantly outperforms all baselines, resolving 28.6% of tasks on Rust-SWE-bench, a 34.9% improvement over the strongest baseline, and, in aggregate, uniquely solves 46 tasks that no other agent could solve across all adopted advanced LLMs.
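The abstract's "Rust metaprogramming-driven dynamic tracing" can be illustrated with a minimal sketch. Note that the macro name (`trace_expr!`) and its shape are assumptions for illustration, not RUSTFORGER's actual implementation: a declarative macro instruments an expression so that, at runtime, its source location and value are logged before the value is returned, which is one way metaprogramming can support issue reproduction and dynamic analysis.

```rust
// A hypothetical tracing macro: wraps an expression, logs its source
// location, textual form, and runtime value, then yields the value unchanged.
macro_rules! trace_expr {
    ($e:expr) => {{
        let val = $e;
        eprintln!(
            "[trace] {}:{} {} = {:?}",
            file!(),
            line!(),
            stringify!($e),
            val
        );
        val
    }};
}

fn buggy_add(a: i32, b: i32) -> i32 {
    // Instrumenting the suspect expression leaves behavior unchanged
    // while emitting a trace record on stderr.
    trace_expr!(a + b)
}

fn main() {
    let result = buggy_add(2, 3);
    println!("{}", result);
}
```

Because the macro expands at compile time and the traced value must only implement `Debug`, this style of instrumentation can be injected into arbitrary expressions without altering program semantics.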

Metadata

arXiv ID: 2602.22764
Provider: ARXIV
Primary Category: cs.SE
Published: 2026-02-26
Fetched: 2026-02-27 04:35

Comment: Accepted to the 48th International Conference on Software Engineering (ICSE 2026)
Journal reference: 2026 IEEE/ACM 48th International Conference on Software Engineering (ICSE '26), April 12-18, 2026, Rio de Janeiro, Brazil
DOI: 10.1145/3744916.3773108
PDF: https://arxiv.org/pdf/2602.22764v1