AI · LLM · March 13, 2026

Efficient and Interpretable Multi-Agent LLM Routing via Ant Colony Optimization

Authors

Xudong Wang, Chaoning Zhang, Jiaquan Zhang, Chenghao Li, Qigan Sun, Sung-Ho Bae, Peng Wang, Ning Xie, Jie Zou, Yang Yang, Hengtao Shen

Abstract

Large Language Model (LLM)-driven Multi-Agent Systems (MAS) have demonstrated strong capability in complex reasoning and tool use, and heterogeneous agent pools further broaden the quality-cost trade-off space. Despite these advances, real-world deployment is often constrained by high inference cost, latency, and limited transparency, which hinders scalable and efficient routing. Existing routing strategies typically rely on expensive LLM-based selectors or static policies, and offer limited controllability for semantic-aware routing under dynamic loads and mixed intents, often resulting in unstable performance and inefficient resource utilization. To address these limitations, we propose AMRO-S, an efficient and interpretable routing framework for MAS. AMRO-S models MAS routing as a semantic-conditioned path selection problem and enhances routing performance through three key mechanisms: first, it leverages a supervised fine-tuned (SFT) small language model for intent inference, providing a low-overhead semantic interface for each query; second, it decomposes routing memory into task-specific pheromone specialists, reducing cross-task interference and improving path selection under mixed workloads; finally, it employs a quality-gated asynchronous update mechanism that decouples inference from learning, optimizing routing without increasing latency. Extensive experiments on five public benchmarks and high-concurrency stress tests demonstrate that AMRO-S consistently improves the quality-cost trade-off over strong routing baselines while providing traceable routing evidence through structured pheromone patterns.
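The second and third mechanisms described in the abstract (per-task pheromone tables and quality-gated updates) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the class name, parameters, and update rule below are illustrative assumptions about how an ACO-style router with "pheromone specialists" might look.

```python
import random
from collections import defaultdict

class PheromoneRouter:
    """Toy ACO-style router. Keeps one pheromone table per task type
    (a 'pheromone specialist'), so feedback from math queries does not
    interfere with routing for, e.g., coding queries. All names and
    defaults here are hypothetical."""

    def __init__(self, agents, evaporation=0.1, quality_gate=0.5, seed=0):
        self.agents = list(agents)
        self.evaporation = evaporation    # fraction of pheromone decayed per update
        self.quality_gate = quality_gate  # rewards below this are discarded
        # pheromone[task_type][agent], initialized uniformly
        self.pheromone = defaultdict(lambda: {a: 1.0 for a in self.agents})
        self.rng = random.Random(seed)

    def route(self, task_type):
        """Sample an agent with probability proportional to its pheromone."""
        table = self.pheromone[task_type]
        r = self.rng.uniform(0, sum(table.values()))
        acc = 0.0
        for agent, tau in table.items():
            acc += tau
            if r <= acc:
                return agent
        return self.agents[-1]

    def update(self, task_type, agent, reward):
        """Quality-gated update: evaporate all entries, then deposit
        pheromone only if the scored reward clears the gate. Because the
        update reads no routing state it can run asynchronously, after
        the response has been evaluated, without adding latency to
        inference. Returns True if the deposit was applied."""
        if reward < self.quality_gate:
            return False
        table = self.pheromone[task_type]
        for a in table:
            table[a] *= (1.0 - self.evaporation)
        table[agent] += reward
        return True
```

In use, repeated high-reward outcomes for one agent on one task type concentrate pheromone there, while low-quality episodes are filtered out by the gate; the per-task tables themselves are the kind of structured, inspectable evidence the abstract refers to.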

Metadata

arXiv ID: 2603.12933
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-13
Fetched: 2026-03-16 06:01

Comment

11 pages, 3 figures. Submitted to IEEE Transactions on Artificial Intelligence.