Paper

When LLMs Help -- and Hurt -- Teaching Assistants in Proof-Based Courses

Authors

Romina Mahinpei, Sofiia Druchyna, Manoel Horta Ribeiro

Abstract

Teaching assistants (TAs) are essential to grading and feedback provision in proof-based courses, yet these tasks are time-intensive and difficult to scale. Although Large Language Models (LLMs) have been studied for grading and feedback, their effectiveness in proof-based courses is still unknown. Before designing LLM-based systems for this context, a necessary prerequisite is to understand whether LLMs can meaningfully assist TAs with grading and feedback. As such, we present a multi-part case study functioning as a technology probe in an undergraduate proof-based course. We compare rubric-based grading decisions made by an LLM and TAs with varying levels of expertise and examine TAs' perceptions of feedback generated by an LLM. We find substantial disagreement between LLMs and TAs on grading decisions but that LLM-generated feedback can still be useful to TAs for submissions with major errors. We conclude by discussing design implications for human-AI grading and feedback systems in proof-based courses.

Metadata

arXiv ID: 2602.23635
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-02-27
Fetched: 2026-03-02 06:04
