Tags: AI, LLM · March 18, 2026

ArchBench: Benchmarking Generative-AI for Software Architecture Tasks

Authors

Bassam Adnan, Aviral Gupta, Sreemaee Akshathala, Karthik Vaidhyanathan

Abstract

Benchmarks for large language models (LLMs) have progressed from snippet-level function generation to repository-level issue resolution, yet they overwhelmingly target implementation correctness. Software architecture tasks remain under-specified and difficult to compare across models, despite their central role in maintaining and evolving complex systems. We present ArchBench, the first unified platform for benchmarking LLM capabilities on software architecture tasks. ArchBench provides a command-line tool with a standardized pipeline for dataset download, inference with trajectory logging, and automated evaluation, alongside a public web interface with an interactive leaderboard. The platform is built around a plugin architecture where each task is a self-contained module, making it straightforward for the community to contribute new architectural tasks and evaluation results. We use the term LLMs broadly to encompass generative AI (GenAI) solutions for software engineering, including both standalone models and LLM-based coding agents equipped with tools. Both the CLI tool and the web platform are openly available to support reproducible research and community-driven growth of architectural benchmarking.
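To make the plugin architecture and the standardized pipeline concrete, the sketch below shows one way a self-contained task module could be shaped in Python. The class, method, and field names are illustrative assumptions made for this summary, not ArchBench's actual interface.

```python
# Illustrative sketch only: a hypothetical plugin-style task module mirroring
# the pipeline described above (dataset download, inference with trajectory
# logging, automated evaluation). Names are assumptions, not ArchBench's API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class TaskResult:
    instance_id: str
    score: float            # automated evaluation score for one instance
    trajectory: list[str]   # logged model/agent steps, kept for inspection


class ArchitectureTask(ABC):
    """A self-contained benchmark task: its data, prompting, and scoring."""

    name: str

    @abstractmethod
    def download_dataset(self, cache_dir: str) -> list[dict[str, Any]]:
        """Fetch and cache the task's instances."""

    @abstractmethod
    def run_inference(self, instance: dict[str, Any],
                      model: Callable[[str], str]) -> tuple[str, list[str]]:
        """Query a model (or agent); return its answer plus a step trajectory."""

    @abstractmethod
    def evaluate(self, instance: dict[str, Any], answer: str) -> float:
        """Score the answer automatically against the task's reference."""


def run_task(task: ArchitectureTask, model: Callable[[str], str],
             cache_dir: str = "./data") -> list[TaskResult]:
    """Standardized pipeline: download, infer with trajectory logging, evaluate."""
    results = []
    for instance in task.download_dataset(cache_dir):
        answer, trajectory = task.run_inference(instance, model)
        score = task.evaluate(instance, answer)
        results.append(TaskResult(instance["id"], score, trajectory))
    return results
```

Under this kind of interface, contributing a new architectural task would amount to implementing one such module, which is roughly what "each task is a self-contained module" suggests.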

Metadata

arXiv ID: 2603.17833
Provider: ARXIV
Primary Category: cs.SE
Published: 2026-03-18
Comments: 5 pages, 3 figures, Software Architecture Showcase Track, ICSA 2026
Links: https://arxiv.org/abs/2603.17833v1 (abstract), https://arxiv.org/pdf/2603.17833v1 (PDF)
Fetched: 2026-03-19 06:01
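The metadata above was pulled from arXiv (Provider: ARXIV). As a rough sketch, an Atom entry like this one can be retrieved from arXiv's public export API; the snippet below uses only the standard query endpoint and the arXiv ID listed above.

```python
# Sketch: retrieving this paper's Atom entry from arXiv's public export API.
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_ID = "2603.17833"
URL = f"http://export.arxiv.org/api/query?id_list={ARXIV_ID}"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by arXiv

with urllib.request.urlopen(URL) as resp:
    feed = ET.fromstring(resp.read())

entry = feed.find(f"{ATOM}entry")
title = " ".join(entry.findtext(f"{ATOM}title").split())
published = entry.findtext(f"{ATOM}published")
authors = [a.findtext(f"{ATOM}name") for a in entry.findall(f"{ATOM}author")]

print(title)
print(published)
print(", ".join(authors))
```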
