AI LLM February 19, 2026

AgentConductor: Topology Evolution for Multi-Agent Competition-Level Code Generation

Authors

Siyu Wang, Ruotian Lu, Zhihao Yang, Yuchao Wang, Yanzhou Zhang, Lei Xu, Qimin Xu, Guojun Yin, Cailian Chen, Xinping Guan

Abstract

Large language model (LLM)-driven multi-agent systems (MAS) coordinate specialized agents through predefined interaction topologies and have shown promise for complex tasks such as competition-level code generation. Recent studies demonstrate that carefully designed multi-agent workflows and communication graphs can significantly improve code generation performance by leveraging collaborative reasoning. However, existing methods neither adapt topology density to task difficulty nor iteratively refine the topology within an instance using execution feedback, leading to redundant communication and performance bottlenecks. To address these issues, we propose AgentConductor: a reinforcement learning-optimized MAS built around an LLM-based orchestrator agent, which enables end-to-end, feedback-driven dynamic generation of interaction topologies. For each query, AgentConductor infers agent roles and task difficulty, then constructs a task-adapted, density-aware layered directed acyclic graph (DAG) topology, underpinned by two key innovations. First, we design a novel topological density function that provides a communication-aware mathematical characterization of multi-agent interactions. Second, we adopt difficulty interval partitioning, which measures a precise upper bound on topological density for each difficulty level, avoiding excessive pruning and enabling finer-grained control. Empirically, across three competition-level and two foundational code datasets, AgentConductor achieves state-of-the-art accuracy, outperforming the strongest baseline by up to 14.6% in pass@1 accuracy while reducing topology density by 13% and token cost by 68%.
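The density-aware topology construction described in the abstract can be sketched as follows. Everything here is an illustrative assumption, not the paper's formulation: the density function (realized edges over the maximum possible edges between consecutive layers), the difficulty-interval-to-cap mapping, and the per-edge usefulness scores are all hypothetical stand-ins.

```python
def layered_dag_density(layers, edges):
    """One plausible density for a layered DAG: realized edges divided by
    the maximum number of edges between consecutive layers (assumption;
    the paper's exact density function is not given in the abstract)."""
    max_edges = sum(len(a) * len(b) for a, b in zip(layers, layers[1:]))
    return len(edges) / max_edges if max_edges else 0.0

# Hypothetical difficulty intervals -> density upper bounds:
# harder tasks are allowed denser communication graphs.
DENSITY_CAPS = [(0.33, 0.3), (0.66, 0.6), (1.0, 1.0)]

def density_cap(difficulty):
    """Look up the density upper bound for the task's difficulty interval."""
    for upper, cap in DENSITY_CAPS:
        if difficulty <= upper:
            return cap
    return 1.0

def prune_to_cap(layers, edges, difficulty, scores):
    """Drop the lowest-scoring edges until the topology's density fits
    under the cap for this difficulty interval."""
    cap = density_cap(difficulty)
    kept = sorted(edges, key=lambda e: scores[e], reverse=True)
    while kept and layered_dag_density(layers, kept) > cap:
        kept.pop()  # discard the currently least useful edge
    return kept

# Example: planner -> two coders -> tester, 4 possible inter-layer edges.
layers = [["planner"], ["coder1", "coder2"], ["tester"]]
edges = [("planner", "coder1"), ("planner", "coder2"),
         ("coder1", "tester"), ("coder2", "tester")]
scores = {edges[0]: 0.9, edges[1]: 0.5, edges[2]: 0.8, edges[3]: 0.4}

print(layered_dag_density(layers, edges))          # full graph: 1.0
print(prune_to_cap(layers, edges, 0.2, scores))    # easy task keeps 1 edge
```

For an easy task (difficulty 0.2, cap 0.3) the sketch keeps only the single highest-scoring edge, illustrating how an orchestrator could tie communication density to inferred difficulty rather than using a fixed topology.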

Metadata

arXiv ID: 2602.17100
Provider: ARXIV
Primary Category: cs.MA
Published: 2026-02-19
Fetched: 2026-02-21 18:51
