February 24, 2026

Detecting Where Effects Occur by Testing Hypotheses in Order

Authors

Jake Bowers, David Kim, Nuole Chen

Abstract

Experimental evaluations of public policies often randomize a new intervention within many sites or blocks. After a report of an overall result -- statistically significant or not -- the natural question from a policy maker is: \emph{where} did any effects occur? Standard adjustments for multiple testing provide little power to answer this question. In simulations modeled after a 44-block education trial, the Hommel adjustment -- among the most powerful procedures controlling the family-wise error rate (FWER) -- detects effects in only 11\% of truly non-null blocks. We develop a procedure that tests hypotheses top-down through a tree: test the overall null at the root, then groups of blocks, then individual blocks, stopping any branch where the null is not rejected. In the same 44-block design, this approach detects effects in 44\% of non-null blocks -- roughly four times the detection rate. A stopping rule and valid tests at each node suffice for weak FWER control. We show that the strong-sense FWER depends on how rejection probabilities accumulate along paths through the tree. This yields a diagnostic: when power decays fast enough relative to branching, no adjustment is needed; otherwise, an adaptive $\alpha$-adjustment restores control. We apply the method to 25 MDRC education trials and provide an R package, \texttt{manytestsr}.
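The gated, top-down procedure the abstract describes can be sketched in a few lines. This is a toy illustration, not the \texttt{manytestsr} implementation: the tree layout, the exact sign test used at each node, and the unadjusted per-node level ($\alpha = 0.05$, i.e., the "no adjustment needed" regime the abstract mentions) are all assumptions made for the example.

```python
import math

def sign_test_p(diffs):
    """Two-sided exact sign test for median difference = 0.
    A toy stand-in for whatever valid test is applied at each node."""
    nonzero = [d for d in diffs if d != 0]
    n, k = len(nonzero), sum(d > 0 for d in nonzero)
    if n == 0:
        return 1.0
    # Exact two-sided binomial tail under H0: P(positive) = 1/2.
    tail = sum(math.comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def leaf_data(node):
    """Pool the outcomes of all blocks (leaves) under a node."""
    if not node["children"]:
        return list(node["data"])
    return [d for c in node["children"] for d in leaf_data(c)]

def top_down_test(node, alpha=0.05, rejected=None):
    """Test the pooled null at this node; descend into children only
    if rejected, so a non-rejected branch is never explored further."""
    if rejected is None:
        rejected = []
    if sign_test_p(leaf_data(node)) <= alpha:
        rejected.append(node["name"])
        for child in node["children"]:
            top_down_test(child, alpha, rejected)
    return rejected

def leaf(name, data):
    return {"name": name, "data": data, "children": []}

def group(name, kids):
    return {"name": name, "data": None, "children": kids}

# Hypothetical design: two groups of two blocks; group_1 has effects
# (all treatment-control differences positive), group_2 is null.
tree = group("root", [
    group("group_1", [leaf("block_A", [1] * 12), leaf("block_B", [1] * 12)]),
    group("group_2", [leaf("block_C", [1, -1] * 6), leaf("block_D", [1, -1] * 6)]),
])

print(top_down_test(tree))  # ['root', 'group_1', 'block_A', 'block_B']
```

Note how the gate works: because the pooled test at \texttt{group\_2} is not rejected, \texttt{block\_C} and \texttt{block\_D} are never tested at all, which is where the power savings over flat Hommel-style adjustment come from.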

Metadata

arXiv ID: 2602.21068
Provider: ARXIV
Primary Category: stat.ME
Categories: stat.ME, math.ST, stat.AP
Published: 2026-02-24
Fetched: 2026-02-25 06:05
