March 09, 2026

Divide and Predict: An Architecture for Input Space Partitioning and Enhanced Accuracy

Authors

Fenix W. Huang, Henning S. Mortveit, Christian M. Reidys

Abstract

In this article, the authors develop an intrinsic measure for quantifying heterogeneity in training data for supervised learning. This measure is the variance of a random variable that factors through the influences of pairs of training points. The variance is shown to capture data heterogeneity and can thus be used to assess whether a sample is a mixture of distributions. The authors prove that the data itself contains key information that supports a partitioning into blocks. Several proof-of-concept studies quantify the connection between variance and heterogeneity for EMNIST image data and for synthetic data. The authors establish that variance is maximal for equal mixes of distributions, and detail how variance-based data purification, followed by conventional training over the resulting blocks, can lead to significant increases in test accuracy.
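The claim that a pairwise variance peaks for equal mixes can be illustrated with a toy sketch. The paper's actual influence functional is not reproduced here; as a hypothetical stand-in, each unordered pair of training points is scored +1 if the labels agree and -1 if they disagree, and the variance of that score over all pairs serves as the heterogeneity measure. A single-distribution sample then has zero variance, a 50/50 mixture has near-maximal variance, and a skewed mixture sits in between:

```python
import numpy as np

def pair_agreement_variance(y):
    """Variance, over unordered training pairs, of a simple pairwise statistic.

    Stand-in for the paper's influence functional (not reproduced here):
    a pair (i, j) scores +1 if labels agree and -1 if they disagree; the
    variance of this score over all pairs measures heterogeneity.
    """
    y = np.asarray(y)
    agree = y[:, None] == y[None, :]
    score = np.where(agree, 1.0, -1.0)
    iu = np.triu_indices(len(y), k=1)  # unordered pairs, i < j
    return score[iu].var()

n = 200
v_pure  = pair_agreement_variance([0] * n)                      # one distribution
v_equal = pair_agreement_variance([0] * (n // 2) + [1] * (n // 2))  # 50/50 mix
v_skew  = pair_agreement_variance([0] * 180 + [1] * 20)         # 90/10 mix

assert v_pure == 0.0                 # homogeneous sample: constant score
assert v_equal > v_skew > v_pure     # variance peaks at the equal mix
```

The same monotone ordering is what motivates the "divide" step: a high variance flags a mixture, and splitting the sample into blocks before conventional training drives each block's variance back toward zero.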

Metadata

arXiv ID: 2603.08649
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-09
Comment: Under review; 24 pages; 8 figures
Fetched: 2026-03-10 05:43
