Paper
An Expert Schema for Evaluating Large Language Model Errors in Scholarly Question-Answering Systems
Authors
Anna Martin-Boyle, William Humphreys, Martha Brown, Cara Leckey, Harmanpreet Kaur
Abstract
Large Language Models (LLMs) are transforming scholarly tasks like search and summarization, but their reliability remains uncertain. Current evaluation metrics for testing LLM reliability are primarily automated approaches that prioritize efficiency and scalability, but lack contextual nuance and fail to reflect how scientific domain experts assess LLM outputs in practice. We developed and validated a schema for evaluating LLM errors in scholarly question-answering systems that reflects the assessment strategies of practicing scientists. In collaboration with domain experts, we identified 20 error patterns across seven categories through thematic analysis of 68 question-answer pairs. We validated this schema through contextual inquiries with 10 additional scientists, which showed not only which errors experts naturally identify but also how structured evaluation schemas can help them detect previously overlooked issues. Domain experts use systematic assessment strategies, including technical precision testing, value-based evaluation, and meta-evaluation of their own practices. We discuss implications for supporting expert evaluation of LLM outputs, including opportunities for personalized, schema-driven tools that adapt to individual evaluation patterns and expertise levels.
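The abstract does not describe how the schema would be operationalized in an annotation workflow. As a purely illustrative sketch (not the authors' artifact), one way to encode error patterns and expert annotations for question-answer pairs is shown below; all category and pattern names are invented placeholders, since the abstract reports only that there are 20 patterns across seven categories without naming them.

# Minimal sketch of an expert error schema for scholarly QA annotation.
# NOT the paper's implementation; category/pattern names are hypothetical.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ErrorPattern:
    category: str      # one of the schema's top-level error categories
    name: str          # a specific error pattern within that category
    description: str   # guidance shown to the expert annotator

@dataclass
class Annotation:
    qa_pair_id: str                                   # identifier of the question-answer pair
    patterns: list = field(default_factory=list)      # ErrorPattern instances the expert flagged
    notes: str = ""                                   # free-text rationale from the expert

# Hypothetical schema entries, used purely for illustration.
SCHEMA = [
    ErrorPattern("Factual accuracy", "Unsupported claim",
                 "Answer asserts a result not supported by the cited literature."),
    ErrorPattern("Citation handling", "Misattributed source",
                 "Answer cites a paper that does not contain the claimed finding."),
    ErrorPattern("Technical precision", "Imprecise terminology",
                 "Answer uses domain terms loosely in a way an expert would flag."),
]

def annotate(qa_pair_id: str, flagged_names: list) -> Annotation:
    """Attach the schema patterns an expert flagged to a question-answer pair."""
    selected = [p for p in SCHEMA if p.name in flagged_names]
    return Annotation(qa_pair_id=qa_pair_id, patterns=selected)

if __name__ == "__main__":
    record = annotate("qa-001", ["Unsupported claim"])
    print(record)

A structured representation like this would let evaluation tools tally which categories individual experts flag most often, which is one way the personalized, schema-driven tooling mentioned in the abstract could be supported.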
Metadata
arXiv: 2602.21059v1 (https://arxiv.org/abs/2602.21059v1) • Published 2026-02-24
Categories: cs.HC (primary), cs.CL
Comments: 24 pages, 2 figures. Accepted at the ACM CHI Conference on Human Factors in Computing Systems, 2026
DOI: 10.1145/3772318.3791843
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25