March 11, 2026

A Grammar of Machine Learning Workflows

Authors

Simon Roth

Abstract

Data leakage affected 294 published papers across 17 scientific fields (Kapoor & Narayanan, 2023). The dominant response has been documentation: checklists, linters, best-practice guides. Documentation does not prevent these failures. This paper proposes a structural remedy: a grammar that decomposes the supervised learning lifecycle into seven kernel primitives connected by a typed directed acyclic graph (DAG), with four hard constraints that reject the two most damaging leakage classes at call time. The grammar's core contribution is the terminal assess constraint: a runtime-enforced evaluate/assess boundary in which repeated test-set assessment is rejected by a guard on a nominally distinct Evidence type. A companion study across 2,047 experimental instances quantifies why this matters: selection leakage inflates performance by d_z = 0.93 and memorization leakage by d_z = 0.53-1.11. Three independent implementations (Python, R, and Julia) confirm the claims. A specification in the appendix allows anyone to build a conforming implementation.
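The evaluate/assess boundary described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual mlw API: the names `Evidence`, `TestSet`, `evaluate`, and `assess` are assumptions based on the abstract, standing in for the nominally distinct result type and the runtime guard that rejects repeated test-set assessment.

```python
from dataclasses import dataclass


class RepeatedAssessmentError(RuntimeError):
    """Raised when the same test set is assessed more than once."""


@dataclass(frozen=True)
class Evidence:
    """Nominally distinct result type, returned only by assess()."""
    metric: str
    value: float


class TestSet:
    """Held-out data that tracks whether it has already been spent."""
    def __init__(self, X, y):
        self.X, self.y = X, y
        self._spent = False


def evaluate(model, X, y) -> float:
    """Unrestricted evaluation on development data; may be repeated."""
    preds = [model(x) for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)


def assess(model, test: TestSet) -> Evidence:
    """Terminal assessment: callable at most once per test set."""
    if test._spent:
        raise RepeatedAssessmentError("test set has already been assessed")
    test._spent = True
    return Evidence("accuracy", evaluate(model, test.X, test.y))
```

Under this sketch, a second call to `assess` on the same `TestSet` raises at call time, which is the structural (rather than documentary) rejection of repeated test-set assessment that the abstract claims.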

Metadata

arXiv ID: 2603.10742
DOI: 10.5281/zenodo.18905073
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-11
Fetched: 2026-03-12 04:21
Comment: 37 pages, 1 figure, 15 tables. Three implementations: Python (PyPI: mlw), R (CRAN: ml), Julia. Code: github.com/epagogy/ml
