Paper
AudioCapBench: Quick Evaluation on Audio Captioning across Sound, Music, and Speech
Authors
Jielin Qiu, Jianguo Zhang, Zixiang Chen, Liangwei Yang, Ming Zhu, Juntao Tan, Haolin Chen, Wenting Zhao, Rithesh Murthy, Roshan Ram, Akshara Prabhakar, Shelby Heinecke, Caiming Xiong, Silvio Savarese, Huan Wang
Abstract
We introduce AudioCapBench, a benchmark for evaluating the audio captioning capabilities of large multimodal models. AudioCapBench covers three distinct audio domains: environmental sound, music, and speech, with 1,000 curated evaluation samples drawn from established datasets. We evaluate 13 models from two providers (OpenAI, Google Gemini) using both reference-based metrics (METEOR, BLEU, ROUGE-L) and an LLM-as-Judge framework that scores predictions on three orthogonal dimensions: accuracy (semantic correctness), completeness (coverage of reference content), and hallucination (absence of fabricated content). Our results reveal that Gemini models generally outperform OpenAI models on overall captioning quality, with Gemini 3 Pro achieving the highest overall score (6.00/10), while OpenAI models exhibit lower hallucination rates. All models perform best on speech captioning and worst on music captioning. We release the benchmark and evaluation code to facilitate reproducible audio understanding research.
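To make the two-track evaluation protocol concrete, here is a minimal sketch of what per-sample scoring could look like. The metric calls use standard libraries (nltk, rouge-score); the judge prompt wording, the 0-10 scale mapping, and the `call_judge` callable are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of AudioCapBench-style per-sample scoring, under assumptions:
# the paper's exact prompts and aggregation are not given here, so the
# judge prompt and the `call_judge` stub are placeholders.
# Requires: nltk (with 'punkt' and 'wordnet' data) and rouge-score.
import json

from nltk.tokenize import word_tokenize
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

_ROUGE = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)


def reference_based_scores(prediction: str, reference: str) -> dict:
    """METEOR, BLEU, and ROUGE-L for one caption against one reference."""
    hyp = word_tokenize(prediction.lower())
    ref = word_tokenize(reference.lower())
    return {
        "meteor": meteor_score([ref], hyp),
        "bleu": sentence_bleu(
            [ref], hyp, smoothing_function=SmoothingFunction().method1
        ),
        "rougeL": _ROUGE.score(reference, prediction)["rougeL"].fmeasure,
    }


# Hypothetical judge prompt covering the abstract's three dimensions.
JUDGE_PROMPT = """You are grading an audio caption against a reference.
Reference: {reference}
Prediction: {prediction}
Score each dimension from 0 to 10 and reply as JSON with keys
"accuracy" (semantic correctness), "completeness" (coverage of the
reference content), and "hallucination" (10 = no fabricated content).
"""


def judge_scores(prediction: str, reference: str, call_judge) -> dict:
    """LLM-as-Judge scoring on accuracy, completeness, and hallucination.

    `call_judge` is a hypothetical callable (prompt -> JSON string)
    wrapping whichever LLM API serves as the judge.
    """
    raw = call_judge(JUDGE_PROMPT.format(reference=reference, prediction=prediction))
    scores = json.loads(raw)
    return {k: float(scores[k]) for k in ("accuracy", "completeness", "hallucination")}
```

In a setup like this, per-sample scores would presumably be averaged within each domain (sound, music, speech) and then combined into the overall 0-10 score the abstract reports; the exact aggregation is not specified here.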
Metadata
arXiv: 2602.23649v1 • Published: 2026-02-27 • Categories: cs.SD (primary), cs.AI
Links: https://arxiv.org/abs/2602.23649v1 (abstract) • https://arxiv.org/pdf/2602.23649v1 (PDF)
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25