
Human-in-the-Loop LLM Grading for Handwritten Mathematics Assessments

Authors

Arne Vanhoyweghen, Vincent Holst, Melika Mobini, Lukas Van de Voorde, Tibo Vanleke, Bert Verbruggen, Brecht Verbeken, Andres Algaba, Sam Verboven, Marie-Anne Guerry, Filip Van Droogenbroeck, Vincent Ginis

Abstract

Providing timely and individualised feedback on handwritten student work is highly beneficial for learning but difficult to achieve at scale. This challenge has become more pressing as generative AI undermines the reliability of take-home assessments, shifting emphasis toward supervised, in-class evaluation. We present a scalable, end-to-end workflow for LLM-assisted grading of short, pen-and-paper assessments. The workflow spans (1) constructing solution keys, (2) developing detailed rubric-style grading keys used to guide the LLM, and (3) a grading procedure that combines automated scanning and anonymisation, multi-pass LLM scoring, automated consistency checks, and mandatory human verification. We deploy the system in two undergraduate mathematics courses using six low-stakes in-class tests. Empirically, LLM assistance reduces grading time by approximately 23% while achieving agreement comparable to, and in several cases tighter than, fully manual grading. Occasional model errors occur but are effectively contained by the hybrid design. Overall, our results show that carefully embedded human-in-the-loop LLM grading can substantially reduce workload while maintaining fairness and accuracy.
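The multi-pass scoring and automated consistency check described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function names, the number of passes, and the disagreement tolerance are all assumptions.

```python
def score_with_consistency(grade_once, answer_image, rubric,
                           passes=3, tolerance=0.5):
    """Run several independent LLM grading passes and flag disagreement.

    grade_once: a callable (e.g. wrapping an LLM API call) that returns
    a numeric score for one grading pass. Returns (score, needs_review).
    All names and thresholds here are illustrative assumptions.
    """
    scores = [grade_once(answer_image, rubric) for _ in range(passes)]
    spread = max(scores) - min(scores)
    # Automated consistency check: if the passes disagree beyond the
    # tolerance, flag the submission for extra scrutiny during the
    # mandatory human verification step.
    needs_review = spread > tolerance
    score = sum(scores) / len(scores)
    return score, needs_review
```

In this sketch, agreement across passes yields an averaged score, while disagreement routes the item to the human grader, mirroring the hybrid containment of occasional model errors that the abstract reports.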

Metadata

arXiv ID: 2603.13083
Provider: ARXIV
Primary Category: cs.CY
Categories: cs.CY, cs.AI
Comment: 19 pages, 5 figures
Published: 2026-03-13
Fetched: 2026-03-16 06:01

