
Vibe Code Bench: Evaluating AI Models on End-to-End Web Application Development

Authors

Hung Tran, Langston Nashold, Rayan Krishnan, Antoine Bigeard, Alex Gu

Abstract

Code generation has emerged as one of AI's highest-impact use cases, yet existing benchmarks measure isolated tasks rather than the complete "zero-to-one" process of building a working application from scratch. We introduce Vibe Code Bench, a benchmark of 100 web application specifications (50 public validation, 50 held-out test) with 964 browser-based workflows comprising 10,131 substeps, evaluated against deployed applications by an autonomous browser agent. Across 16 frontier models, the best achieves only 58.0% accuracy on the test split, revealing that reliable end-to-end application development remains a frontier challenge. We identify self-testing during generation as a strong performance predictor (Pearson r=0.72), and show through a completed human alignment study that evaluator selection materially affects outcomes (31.8-93.6% pairwise step-level agreement). Our contributions include (1) a novel benchmark dataset and browser-based evaluation pipeline for end-to-end web application development, (2) a comprehensive evaluation of 16 frontier models with cost, latency, and error analysis, and (3) an evaluator alignment protocol with both cross-model and human annotation results.
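
The two statistics quoted in the abstract, the Pearson correlation between self-testing and accuracy and the pairwise step-level agreement between evaluators, are standard calculations. The sketch below is a minimal, hypothetical Python illustration of how such numbers could be computed from per-model summaries and per-substep verdicts; the data, function names, and shapes are assumptions for demonstration only, not material from the paper or its released code.

# Illustrative sketch (not from the paper): toy versions of the two headline
# statistics in the abstract. All data below is made up; function names and
# input shapes are assumptions, not the benchmark's actual code.

from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between paired samples (e.g. a model's self-testing
    rate during generation vs. its step-level accuracy on the benchmark)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def pairwise_step_agreement(a: list[bool], b: list[bool]) -> float:
    """Fraction of workflow substeps on which two evaluators (browser agents
    or human annotators) return the same pass/fail verdict."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy data: self-testing rate and test-split accuracy for four hypothetical models.
self_test_rate = [0.10, 0.35, 0.60, 0.80]
accuracy       = [0.22, 0.30, 0.48, 0.58]
print(f"Pearson r = {pearson_r(self_test_rate, accuracy):.2f}")

# Toy data: two evaluators' verdicts over ten substeps of one workflow.
eval_a = [True, True, False, True, False, True, True, False, True, True]
eval_b = [True, False, False, True, False, True, True, True, True, True]
print(f"step-level agreement = {pairwise_step_agreement(eval_a, eval_b):.1%}")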

Metadata

arXiv ID: 2603.04601
Provider: ARXIV
Primary Category: cs.SE
Categories: cs.SE, cs.AI, cs.CL
Published: 2026-03-04
Fetched: 2026-03-06 14:20
Links: https://arxiv.org/abs/2603.04601v1 (abstract), https://arxiv.org/pdf/2603.04601v1 (PDF)
Comment: Live leaderboard hosted at https://www.vals.ai/benchmarks/vibe-code. Preprint, currently under review. Benchmark first released Nov 2025.
