Paper
AVO: Agentic Variation Operators for Autonomous Evolutionary Search
Authors
Terry Chen, Zhifan Ye, Bing Xu, Zihao Ye, Timmy Liu, Ali Hassani, Tianqi Chen, Andrew Kerr, Haicheng Wu, Yang Xu, Yu-Jung Chen, Hanfeng Chen, Aditya Kane, Ronny Krashinsky, Ming-Yu Liu, Vinod Grover, Luis Ceze, Roger Bringmann, John Tran, Wei Liu, Fung Xie, Michael Lightstone, Humphrey Shi
Abstract
Agentic Variation Operators (AVO) are a new family of evolutionary variation operators that replace the fixed mutation, crossover, and hand-designed heuristics of classical evolutionary search with autonomous coding agents. Rather than confining a language model to candidate generation within a prescribed pipeline, AVO instantiates variation as a self-directed agent loop that can consult the current lineage, a domain-specific knowledge base, and execution feedback to propose, repair, critique, and verify implementation edits. We evaluate AVO on attention, one of the most aggressively optimized kernel targets in AI, on NVIDIA Blackwell (B200) GPUs. Over 7 days of continuous autonomous evolution on multi-head attention, AVO discovers kernels that outperform cuDNN by up to 3.5% and FlashAttention-4 by up to 10.5% across the evaluated configurations. The discovered optimizations transfer readily to grouped-query attention, requiring only 30 minutes of additional autonomous adaptation and yielding gains of up to 7.0% over cuDNN and 9.3% over FlashAttention-4. Together, these results show that agentic variation operators move beyond prior LLM-in-the-loop evolutionary pipelines by elevating the agent from candidate generator to variation operator, and can discover performance-critical micro-architectural optimizations that produce kernels surpassing state-of-the-art expert-engineered attention implementations on today's most advanced GPU hardware.
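The abstract describes variation as a self-directed propose/repair/critique/verify loop rather than a fixed mutation rule. The following is a minimal toy sketch of that idea, not the paper's implementation: all names (`evaluate`, `propose_edit`, `agentic_variation`) and the string-of-integers "genome" are hypothetical stand-ins, and the toy "agent" simply uses mismatch feedback where a real agent would invoke an LLM with lineage and knowledge-base context.

```python
# Hypothetical sketch of an "agentic" variation operator: variation is a small
# propose -> verify -> repair loop driven by execution feedback, instead of a
# single blind random mutation. All names are illustrative, not from the paper.

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # stands in for a correctness/performance oracle


def evaluate(candidate):
    """Return (score, feedback); feedback lists mismatched positions."""
    mismatches = [i for i, (a, b) in enumerate(zip(candidate, TARGET)) if a != b]
    return -len(mismatches), mismatches


def propose_edit(candidate, feedback):
    """Toy 'agent': use execution feedback to repair one faulty position.

    A real agentic operator would reason over the lineage and a knowledge
    base to propose a code edit here.
    """
    new = list(candidate)
    if feedback:
        i = feedback[0]
        new[i] = TARGET[i]
    return new


def agentic_variation(parent, max_steps=4):
    """Variation as a self-directed loop: propose, verify, repair, repeat."""
    best = parent
    best_score, feedback = evaluate(parent)
    for _ in range(max_steps):
        child = propose_edit(best, feedback)
        score, feedback = evaluate(child)
        if score > best_score:
            best, best_score = child, score
        if best_score == 0:  # verified: candidate satisfies the oracle
            break
    return best


seed = [0, 1, 4, 0, 5, 9, 0, 6]  # three faulty positions
child = agentic_variation(seed)
```

In this toy, the seed has three mismatches; each loop iteration repairs one using the feedback, so the returned child scores at least as well as its parent. The point is structural: the operator itself closes the loop with the evaluator, which is the shift the abstract claims over LLM-as-candidate-generator pipelines.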