March 04, 2026

Optimal Prediction-Augmented Algorithms for Testing Independence of Distributions

Authors

Maryam Aliakbarpour, Alireza Azizi, Ria Stevens

Abstract

Independence testing is a fundamental problem in statistical inference: given samples from a joint distribution $p$ over multiple random variables, the goal is to determine whether $p$ is a product distribution or is $\varepsilon$-far from all product distributions in total variation distance. In the non-parametric finite-sample regime, this task is notoriously expensive, as the minimax sample complexity scales polynomially with the support size. In this work, we move beyond these worst-case limitations by leveraging the framework of \textit{augmented distribution testing}. We design independence testers that incorporate auxiliary, but potentially untrustworthy, predictive information. Our framework ensures that the tester remains robust, maintaining worst-case validity regardless of the prediction's quality, while significantly improving sample efficiency when the prediction is accurate. Our main contributions include: (i) a bivariate independence tester for discrete distributions that adaptively reduces sample complexity based on the prediction error; (ii) a generalization to the high-dimensional multivariate setting for testing the independence of $d$ random variables; and (iii) matching minimax lower bounds demonstrating that our testers achieve optimal sample complexity.
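To make the testing task concrete, the following is a minimal illustrative sketch of a naive plug-in bivariate independence check: estimate the joint distribution empirically, compare it to the product of its empirical marginals in total variation distance, and threshold. This is not the paper's prediction-augmented algorithm (which the abstract only summarizes); the function names, the threshold $\varepsilon/2$, and the sample sizes here are illustrative assumptions.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def plugin_independence_test(samples, n1, n2, epsilon):
    """Naive plug-in test (illustrative, not the paper's algorithm):
    build the empirical joint distribution over [n1] x [n2], compare it
    to the product of its empirical marginals, and declare 'independent'
    if the TV distance falls below epsilon / 2.

    samples: integer array of shape (m, 2) with entries in [0, n1) x [0, n2).
    """
    joint = np.zeros((n1, n2))
    for x, y in samples:
        joint[x, y] += 1
    joint /= joint.sum()
    marg_x = joint.sum(axis=1)   # empirical marginal of the first coordinate
    marg_y = joint.sum(axis=0)   # empirical marginal of the second coordinate
    product = np.outer(marg_x, marg_y)
    return bool(tv_distance(joint.ravel(), product.ravel()) < epsilon / 2)

rng = np.random.default_rng(0)
xs = rng.integers(0, 4, size=5000)
ys = rng.integers(0, 4, size=5000)

# Independent case: the two coordinates are drawn independently.
indep = plugin_independence_test(np.column_stack([xs, ys]), 4, 4, 0.2)

# Dependent case: the second coordinate copies the first exactly.
dep = plugin_independence_test(np.column_stack([xs, xs]), 4, 4, 0.2)
print(indep, dep)  # → True False
```

The plug-in approach needs enough samples to estimate every cell of the joint, which is exactly the polynomial-in-support-size cost the abstract refers to; the paper's contribution is to cut this cost when an accurate external prediction of the distribution is available, without losing worst-case validity when it is not.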

Metadata

arXiv ID: 2603.04635
Provider: ARXIV
Primary Category: stat.ML
Published: 2026-03-04
Fetched: 2026-03-06 14:20
