AI LLM February 19, 2026

Using LLMs for Knowledge Component-level Correctness Labeling in Open-ended Coding Problems

Authors

Zhangqi Duan, Arnav Kankaria, Dhruv Kartik, Andrew Lan

Abstract

Fine-grained skill representations, commonly referred to as knowledge components (KCs), are fundamental to many approaches in student modeling and learning analytics. However, KC-level correctness labels are rarely available in real-world datasets, especially for open-ended programming tasks where solutions typically involve multiple KCs simultaneously. Simply propagating problem-level correctness to all associated KCs obscures partial mastery and often leads to poorly fitted learning curves. To address this challenge, we propose an automated framework that leverages large language models (LLMs) to label KC-level correctness directly from student-written code. Our method assesses whether each KC is correctly applied and further introduces a temporal context-aware Code-KC mapping mechanism to better align KCs with individual student code. We evaluate the resulting KC-level correctness labels in terms of learning curve fit and predictive performance using the power law of practice and the Additive Factors Model. Experimental results show that our framework leads to learning curves that are more consistent with cognitive theory and improves predictive performance, compared to baselines. Human evaluation further demonstrates substantial agreement between LLM and expert annotations.
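The abstract evaluates KC-level labels via learning-curve fit under the power law of practice, which models a KC's error rate as decaying with practice opportunities, E(t) = a · t^(−b). As an illustrative sketch only (not the paper's actual fitting procedure), the two parameters can be recovered by ordinary least squares in log-log space:

```python
import math

def fit_power_law(error_rates):
    """Fit E(t) = a * t^(-b) to per-opportunity KC error rates.

    error_rates[i] is the error rate at opportunity t = i + 1.
    Returns (a, b), estimated by least squares on log E vs. log t.
    Illustrative sketch; real data needs care with zero error rates.
    """
    xs = [math.log(t + 1) for t in range(len(error_rates))]
    ys = [math.log(e) for e in error_rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # a, b

# Synthetic check: data generated exactly from E(t) = 0.5 * t^(-0.3)
rates = [0.5 * t ** (-0.3) for t in range(1, 11)]
a, b = fit_power_law(rates)
```

A steeper fitted decay (larger b) indicates a cleaner learning curve; the paper's claim is that LLM-derived KC-level labels yield curves more consistent with this power-law form than propagating problem-level correctness.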

Metadata

arXiv ID: 2602.17542
Provider: ARXIV
Primary Category: cs.CL
Secondary Category: cs.CY
PDF: https://arxiv.org/pdf/2602.17542v1
Published: 2026-02-19
Fetched: 2026-02-21 18:51
