Paper
Rethinking Ground Truth: A Case Study on Human Label Variation in MLLM Benchmarking
Authors
Tomas Ruiz, Tanalp Agustoslu, Carsten Schwemmer
Abstract
Human Label Variation (HLV), i.e. systematic differences among annotators' judgments, remains underexplored in benchmarks despite rapid progress in large language model (LLM) development. We address this gap by introducing an evaluation protocol for multimodal large language model (MLLM) benchmarking that explicitly accounts for two conditions: (1) human label agreement and (2) disagreement. We apply this protocol to two state-of-the-art MLLM families (Gemma 3, Qwen 2.5 VL) using non-aggregated human annotations from a social media content classification dataset. Across tasks, we find that larger models tend to perform best on high-agreement subsets, yet often underperform medium-sized models when human disagreement is high, indicating that parameter count alone does not determine sensitivity to ambiguity and subjectivity. These results show that benchmarks based solely on consensus labels can overstate model capabilities in such domains and that incorporating human label variation yields more realistic and robust assessments of MLLMs in content moderation pipelines.
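The protocol described in the abstract hinges on one operation: partitioning the benchmark into a high-agreement subset and a disagreement subset using the raw, non-aggregated annotator labels, then scoring models on each subset separately. The sketch below is a minimal illustration of that idea, not the authors' released code; the `labels` field, the 0.8 agreement threshold, and scoring against the per-item majority label are all assumptions chosen for the example.

```python
from collections import Counter

def agreement(labels):
    """Fraction of annotators who chose the modal label for one item."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

def split_by_agreement(items, threshold=0.8):
    """Partition items into (high-agreement, disagreement) subsets.

    Each item is assumed to be a dict with a 'labels' list holding the
    raw, non-aggregated label from each annotator.
    """
    high, low = [], []
    for item in items:
        (high if agreement(item["labels"]) >= threshold else low).append(item)
    return high, low

def accuracy(items, predict):
    """Accuracy against each item's majority (consensus) label."""
    correct = 0
    for item in items:
        majority = Counter(item["labels"]).most_common(1)[0][0]
        correct += int(predict(item) == majority)
    return correct / len(items)

# Hypothetical usage: report the two conditions side by side, so that a
# model strong only on consensus items cannot hide behind one number.
# high, low = split_by_agreement(dataset)
# print("high-agreement acc:", accuracy(high, model.predict))
# print("disagreement acc:  ", accuracy(low, model.predict))
```

Reporting the two accuracies separately, rather than one pooled score, is what lets the comparison in the abstract surface: a larger model can lead on the high-agreement subset while trailing a medium-sized one on the disagreement subset.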
Metadata
arXiv: 2603.19744v1 (cs.CL) • Published 2026-03-20
Comments: 6 pages, 3 tables, 1 figure
Journal reference: 2025 IEEE International Conference on Big Data (BigData), 2025
DOI: 10.1109/BigData66926.2025.11401919