Paper
Human-in-the-Loop LLM Grading for Handwritten Mathematics Assessments
Authors
Arne Vanhoyweghen, Vincent Holst, Melika Mobini, Lukas Van de Voorde, Tibo Vanleke, Bert Verbruggen, Brecht Verbeken, Andres Algaba, Sam Verboven, Marie-Anne Guerry, Filip Van Droogenbroeck, Vincent Ginis
Abstract
Providing timely and individualised feedback on handwritten student work is highly beneficial for learning but difficult to achieve at scale. This challenge has become more pressing as generative AI undermines the reliability of take-home assessments, shifting emphasis toward supervised, in-class evaluation. We present a scalable, end-to-end workflow for LLM-assisted grading of short, pen-and-paper assessments. The workflow spans (1) constructing solution keys, (2) developing detailed rubric-style grading keys used to guide the LLM, and (3) a grading procedure that combines automated scanning and anonymisation, multi-pass LLM scoring, automated consistency checks, and mandatory human verification. We deploy the system in two undergraduate mathematics courses using six low-stakes in-class tests. Empirically, LLM assistance reduces grading time by approximately 23% while achieving agreement comparable to, and in several cases tighter than, fully manual grading. Occasional model errors occur but are effectively contained by the hybrid design. Overall, our results show that carefully embedded human-in-the-loop LLM grading can substantially reduce workload while maintaining fairness and accuracy.
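The grading procedure described above (multi-pass LLM scoring followed by automated consistency checks and human verification) can be sketched as a small aggregation loop. This is an illustrative reconstruction, not the authors' implementation: the function names (`multi_pass_grade`, `grade_fn`), the number of passes, the median aggregate, and the spread-based tolerance rule are all assumptions.

```python
from statistics import median

def multi_pass_grade(grade_fn, answer, rubric, passes=3, tolerance=0.5):
    """Score one answer several times and flag disagreement for review.

    grade_fn(answer, rubric) -> float stands in for a single LLM scoring
    call guided by a rubric-style grading key. The tolerance threshold and
    median aggregation are hypothetical choices for this sketch.
    """
    scores = [grade_fn(answer, rubric) for _ in range(passes)]
    spread = max(scores) - min(scores)
    return {
        "score": median(scores),             # robust aggregate over passes
        "needs_review": spread > tolerance,  # automated consistency check
        "passes": scores,
    }

# Deterministic stand-in for a stochastic LLM grader, for illustration only:
# three passes return 2.0, 2.0, 1.0 points for the same answer.
_stub_scores = iter([2.0, 2.0, 1.0])
result = multi_pass_grade(
    lambda answer, rubric: next(_stub_scores),
    answer="x + 2 = 5, so x = 3",
    rubric="1 pt setup, 1 pt solve",
)
```

Note that in the paper's hybrid design human verification is mandatory for every script; a flag like `needs_review` would only prioritise which scores a grader scrutinises first.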
Metadata
arXiv: 2603.13083v1 • Published: 2026-03-13 • Primary category: cs.CY (also cs.AI) • 19 pages, 5 figures
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25