AI LLM March 03, 2026

From Heuristic Selection to Automated Algorithm Design: LLMs Benefit from Strong Priors

Authors

Qi Huang, Furong Ye, Ananta Shahane, Thomas Bäck, Niki van Stein

Abstract

Large Language Models (LLMs) have been widely adopted for automated algorithm design, demonstrating strong abilities in generating and evolving algorithms across various fields. Existing work has largely focused on examining their effectiveness on specific problems, with search strategies primarily guided by adaptive prompt designs. In this paper, by investigating the token-wise attribution of prompts to LLM-generated algorithmic code, we show that providing high-quality algorithmic code examples can substantially improve the performance of LLM-driven optimization. Building on this insight, we propose leveraging prior benchmark algorithms to guide LLM-driven optimization and demonstrate superior performance on two black-box optimization benchmarks: the pseudo-Boolean optimization suite (PBO) and the black-box optimization suite (BBOB). Our findings highlight the value of integrating benchmarking studies to enhance both the efficiency and robustness of LLM-driven black-box optimization methods.
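To make the abstract's core idea concrete, here is a minimal sketch of what "leveraging a prior benchmark algorithm" could look like in practice: a well-tested baseline (a (1+1) evolutionary algorithm for pseudo-Boolean problems, a classic PBO-style heuristic) is embedded verbatim in the prompt as an in-context example. All names and the prompt wording are assumptions for illustration, not the authors' actual pipeline.

```python
import inspect
import random


def one_plus_one_ea(fitness, n, budget, seed=0):
    """A (1+1) EA on bit strings: flip each bit with probability 1/n,
    keep the offspring if it is at least as good (elitist selection).
    This kind of benchmark algorithm is the 'strong prior' the abstract
    proposes feeding to the LLM."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(budget):
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]  # bitwise mutation
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx


def build_prompt(task_description, prior_source):
    """Embed a benchmark algorithm's source code as an in-context example
    (hypothetical prompt template, not the paper's exact wording)."""
    return (
        f"Task: {task_description}\n"
        "Here is a well-tested baseline algorithm:\n"
        f"```python\n{prior_source}\n```\n"
        "Propose an improved variant of this algorithm."
    )


if __name__ == "__main__":
    onemax = lambda bits: sum(bits)  # a standard PBO test function
    best, value = one_plus_one_ea(onemax, n=20, budget=500)
    prompt = build_prompt(
        "maximize OneMax over 20-bit strings",
        inspect.getsource(one_plus_one_ea),
    )
    print(value, len(prompt))
```

The point of the sketch: instead of asking the LLM to invent an optimizer from scratch, the prompt anchors generation on a known, benchmarked heuristic, which the abstract argues substantially improves the resulting algorithms.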

Metadata

arXiv ID: 2603.02792
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-03
Fetched: 2026-03-04 03:41
