AI LLM March 18, 2026

DebugLM: Learning Traceable Training Data Provenance for LLMs

Authors

Wenjie Jacky Mo, Qin Liu, Xiaofei Wen, Wenxuan Zhou, Zhe Zhao, Muhao Chen

Abstract

Large language models (LLMs) are trained through multi-stage pipelines over heterogeneous data sources, yet developers lack a principled way to pinpoint the specific data responsible for an observed behavior. This lack of observability reduces debugging to reactive patching and makes failures prone to recur under distribution shift or subsequent model updates. To address this limitation, we propose DebugLM, a framework that equips LLMs with built-in data provenance, enabling them to explicitly trace the origins of their behaviors to specific training data sources. Specifically, the model learns to associate its responses with unique provenance tags that indicate the responsible dataset, empowering developers to precisely identify where undesirable behaviors are learned. Building on this capability, DebugLM further supports targeted test-time remediation, enabling developers to selectively trigger targeted refusal for specified data sources without retraining or modifying model parameters. Experiments demonstrate that DebugLM provides accurate behavior tracing in multi-stage training pipelines and effective test-time remediation while preserving the general utility of the model.
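The abstract describes two capabilities: the model attaches a provenance tag to each response identifying the responsible training source, and developers can then block behaviors traced to specified sources at test time without retraining. The paper does not specify an interface, so the following is a minimal hypothetical sketch of how that tag-then-refuse flow might look; `generate_with_provenance`, the dataset IDs, and `BLOCKED_SOURCES` are all illustrative stand-ins, not the authors' actual API.

```python
# Hypothetical sketch of DebugLM-style provenance tagging and test-time
# remediation. In the paper the tag is learned during training; here a toy
# lookup table stands in for the model's forward pass.

BLOCKED_SOURCES = {"web_scrape_v2"}  # illustrative dataset IDs a developer wants to suppress


def generate_with_provenance(prompt: str) -> tuple[str, str]:
    """Stand-in for a model that emits a response plus a provenance tag
    naming the training data source it attributes the behavior to."""
    toy_model = {
        "capital of France?": ("Paris.", "wiki_snapshot"),
        "write a phishing email": ("Sure, here is...", "web_scrape_v2"),
    }
    return toy_model.get(prompt, ("I don't know.", "unknown"))


def remediated_answer(prompt: str) -> str:
    """Test-time remediation: refuse when the attributed source is blocked,
    without retraining or modifying model parameters."""
    response, tag = generate_with_provenance(prompt)
    if tag in BLOCKED_SOURCES:
        return f"[refused: behavior traced to blocked source '{tag}']"
    return response
```

The point of the sketch is the control flow: attribution happens at generation time, so remediation is a pure inference-side filter keyed on the tag rather than a retraining step.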

Metadata

arXiv ID: 2603.17884
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-18
Fetched: 2026-03-19 06:01
