Research

Paper

March 03, 2026

LOO-PIT predictive model checking

Authors

Herman Tesso, Aki Vehtari

Abstract

We consider predictive checking for Bayesian model assessment using the leave-one-out probability integral transform (LOO-PIT). LOO-PIT values are conditional cumulative predictive probabilities of the left-out observations under the corresponding LOO predictive distributions. For a well-calibrated model, LOO-PIT values should be nearly uniformly distributed, but in the finite-sample case they are not independent, because the LOO predictive distributions are determined by nearly the same data (all but one observation). We prove that this dependency is non-negligible in the finite case and depends on model complexity. We propose three testing procedures that can be used for continuous and discrete dependent uniform values. We also propose an automated graphical method for visualizing local departures from the null. Extensive numerical experiments on simulated and real datasets demonstrate that the proposed tests achieve competitive performance overall and have much higher power than standard uniformity tests based on the independence assumption, which inevitably leads to lower-than-expected rejection rates.
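To make the quantity concrete, the following is a minimal sketch of computing LOO-PIT values in a toy setting (this is not the paper's testing procedure). Each LOO-PIT value is the LOO predictive CDF evaluated at the left-out observation, p_i = F_{-i}(y_i). The model here is an assumed illustration: a Gaussian with known variance and a flat prior on the mean, for which the LOO predictive for y_i is normal with mean equal to the mean of y_{-i} and variance sigma^2 * (1 + 1/(n-1)); all function names are hypothetical.

```python
# Minimal LOO-PIT sketch (illustrative toy model, not the paper's method).
import math
import random

def normal_cdf(x, mu, sd):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def loo_pit_gaussian(y, sigma=1.0):
    """LOO-PIT values for a Gaussian model with known sd `sigma`
    and a flat prior on the mean (assumed toy setup)."""
    n = len(y)
    total = sum(y)
    pits = []
    for yi in y:
        mu_loo = (total - yi) / (n - 1)                  # posterior mean from y_{-i}
        sd_loo = sigma * math.sqrt(1.0 + 1.0 / (n - 1))  # LOO predictive sd
        pits.append(normal_cdf(yi, mu_loo, sd_loo))      # p_i = F_{-i}(y_i)
    return pits

random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(200)]
pits = loo_pit_gaussian(y)
# For a well-calibrated model these values look roughly uniform on [0, 1],
# but they are not independent: each one is computed from nearly the same
# data (all but one observation), which is the dependency the paper studies.
```

Note that each p_i reuses n-1 of the same observations, so classical uniformity tests that assume i.i.d. samples are miscalibrated here; that finite-sample dependence is what motivates the proposed tests.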

Metadata

arXiv ID: 2603.02928
Provider: ARXIV
Primary Category: stat.ME
Secondary Category: stat.CO
Comment: 30 pages
Published: 2026-03-03
Fetched: 2026-03-04 03:41
