Paper

February 24, 2026

Is Multi-Distribution Learning as Easy as PAC Learning: Sharp Rates with Bounded Label Noise

Authors

Rafael Hanashiro, Abhishek Shetty, Patrick Jaillet

Abstract

Towards understanding the statistical complexity of learning from heterogeneous sources, we study the problem of multi-distribution learning. Given $k$ data sources, the goal is to output a classifier for each source by exploiting shared structure to reduce sample complexity. We focus on the bounded label noise setting to determine whether the fast $1/\varepsilon$ rates achievable in single-task learning extend to this regime with minimal dependence on $k$. Surprisingly, we show that this is not the case. We demonstrate that learning across $k$ distributions inherently incurs slow rates scaling with $k/\varepsilon^2$, even under constant noise levels, unless each distribution is learned separately. A key technical contribution is a structured hypothesis-testing framework that captures the statistical cost of certifying near-optimality under bounded noise, a cost we show is unavoidable in the multi-distribution setting. Finally, we prove that when competing with the stronger benchmark of each distribution's optimal Bayes error, the sample complexity incurs a multiplicative penalty in $k$. This establishes a statistical separation between random classification noise and Massart noise, highlighting a fundamental barrier unique to learning from multiple sources.
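
As a quick schematic of the gap described in the abstract (informal notation; constants, log factors, and the complexity of the hypothesis class are omitted, and the precise statements and assumptions are in the paper), the contrast is

\[
\underbrace{\tilde{O}\!\left(\frac{1}{\varepsilon}\right)}_{\text{single source, bounded label noise}}
\quad \text{vs.} \quad
\underbrace{\Omega\!\left(\frac{k}{\varepsilon^{2}}\right)}_{\text{$k$ sources, shared learning}},
\]

with a further multiplicative penalty in $k$ when the benchmark is each distribution's Bayes-optimal error rather than a single shared comparator.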

Metadata

arXiv ID: 2602.21039
Provider: ARXIV
Primary Category: stat.ML
Published: 2026-02-24
Fetched: 2026-02-25 06:05
