Paper
SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GPU Kernels Against Hardware Limits
Authors
Edward Lin, Sahil Modi, Siva Kumar Sastry Hari, Qijing Huang, Zhifan Ye, Nestor Qin, Fengzhe Zhou, Yuan Zhang, Jingquan Wang, Sana Damani, Dheeraj Peri, Ouye Xie, Aditya Kane, Moshe Maor, Michael Behar, Triston Cao, Rishabh Mehta, Vartika Singh, Vikram Sharma Mailthody, Terry Chen, Zihao Ye, Hanfeng Chen, Tianqi Chen, Vinod Grover, Wei Chen, Wei Liu, Eric Chung, Luis Ceze, Roger Bringmann, Cyril Zeller, Michael Lightstone, Christos Kozyrakis, Humphrey Shi
Abstract
As agentic AI systems become increasingly capable of generating and optimizing GPU kernels, progress is constrained by benchmarks that reward speedup over software baselines rather than proximity to hardware-efficient execution. We present SOL-ExecBench, a benchmark of 235 CUDA kernel optimization problems extracted from 124 production and emerging AI models spanning language, diffusion, vision, audio, video, and hybrid architectures, targeting NVIDIA Blackwell GPUs. The benchmark covers forward and backward workloads across BF16, FP8, and NVFP4, including kernels whose best performance is expected to rely on Blackwell-specific capabilities. Unlike prior benchmarks that evaluate kernels primarily relative to software implementations, SOL-ExecBench measures performance against analytically derived Speed-of-Light (SOL) bounds computed by SOLAR, our pipeline for deriving hardware-grounded SOL bounds, yielding a fixed target for hardware-efficient optimization. We report a SOL Score that quantifies how much of the gap between a release-defined scoring baseline and the hardware SOL bound a candidate kernel closes. To support robust evaluation of agentic optimizers, we additionally provide a sandboxed harness with GPU clock locking, L2 cache clearing, isolated subprocess execution, and static-analysis-based checks against common reward-hacking strategies. SOL-ExecBench reframes GPU kernel benchmarking from beating a mutable software baseline to closing the remaining gap to hardware Speed-of-Light.
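The abstract describes the SOL Score as the fraction of the gap between a scoring baseline and the hardware SOL bound that a candidate kernel closes. The paper's exact formula is not given here, so the following is a minimal illustrative sketch under the natural reading of that description, using hypothetical latencies in milliseconds (all variable names are assumptions, not from the paper):

```python
def sol_score(t_candidate: float, t_baseline: float, t_sol: float) -> float:
    """Fraction of the baseline-to-SOL latency gap closed by a candidate kernel.

    Assumed interpretation of the paper's description:
      0.0 -> no improvement over the release-defined scoring baseline
      1.0 -> candidate reaches the analytically derived hardware SOL bound
    """
    gap = t_baseline - t_sol
    if gap <= 0:
        raise ValueError("baseline latency must exceed the SOL bound")
    return (t_baseline - t_candidate) / gap

# Hypothetical example: baseline 2.0 ms, SOL bound 0.5 ms, candidate 1.1 ms
# closes (2.0 - 1.1) / (2.0 - 0.5) of the gap, i.e. about 0.6
print(sol_score(1.1, 2.0, 0.5))
```

Under this reading a score above 1.0 would indicate a measurement or bound-derivation problem, which is one motivation for the paper's hardened harness (clock locking, L2 clearing, isolated subprocess execution).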
Metadata
arXiv: 2603.19173v1 (cs.LG, cross-listed cs.AI) • submitted 2026-03-19
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25