March 24, 2026

Post-Selection Distributional Model Evaluation

Authors

Amirmohammad Farzaneh, Osvaldo Simeone

Abstract

Formal model evaluation methods typically certify that a model satisfies a prescribed target key performance indicator (KPI) level. However, in many applications the relevant target KPI level is not known a priori, and the user may instead wish to compare candidate models by analyzing the full trade-offs between performance and reliability that the models can achieve at test time. This task, which requires reliable estimation of the test-time KPI distributions, is complicated by the fact that the same data must often be used both to pre-select a subset of candidate models and to estimate their KPI distributions, causing a potential post-selection bias. In this work, we introduce post-selection distributional model evaluation (PS-DME), a general framework for statistically valid distributional model assessment after arbitrary data-dependent model pre-selection. Building on e-values, PS-DME controls the post-selection false coverage rate (FCR) of the distributional KPI estimates and is proven to be more sample-efficient than a baseline method based on sample splitting. Experiments on synthetic data, text-to-SQL decoding with large language models, and telecom network performance evaluation demonstrate that PS-DME enables reliable comparison of candidate configurations across a range of reliability levels, supporting the statistically reliable exploration of performance-reliability trade-offs.
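
Illustrative example

The abstract does not spell out the PS-DME construction, but the two ingredients it names, e-values and post-selection FCR control, can be sketched concretely. The Python sketch below is not the paper's algorithm: it combines a standard Hoeffding-type e-value confidence interval for a bounded mean (applied pointwise to the KPI CDF) with the e-CI level adjustment alpha*|S|/m, which, in the spirit of the post-selection result of Xu, Wang, and Ramdas, keeps the FCR controlled under arbitrary data-dependent selection. All function names, the selection rule, and the toy data are illustrative assumptions.

import numpy as np

# Hoeffding-type e-value confidence interval for the mean of [0, 1]-valued
# samples. Inverting the optimized Hoeffding e-process for each tail at
# level alpha/2 recovers the familiar half-width sqrt(log(2/alpha)/(2n)).
def hoeffding_eci(samples, alpha):
    n = len(samples)
    half_width = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    center = samples.mean()
    return max(0.0, center - half_width), min(1.0, center + half_width)

# Post-selection CDF bands: pre-select the `select_top` models with the best
# empirical mean KPI on the SAME data used below for estimation, then report
# pointwise e-CI bands for each selected model's KPI CDF. The level
# adjustment alpha*|S|/m follows the e-CI post-selection FCR result; it is
# an assumption here that PS-DME refines this generic recipe.
def ps_cdf_bands(kpi, thresholds, alpha, select_top):
    m = len(kpi)                                   # number of candidates
    selected = sorted(kpi, key=lambda k: kpi[k].mean(),
                      reverse=True)[:select_top]   # data-dependent selection
    alpha_adj = alpha * len(selected) / m          # FCR-adjusted level
    bands = {}
    for name in selected:
        # Indicator losses 1{KPI_i <= t}: their mean is the CDF at t.
        ind = (kpi[name][:, None] <= thresholds[None, :]).astype(float)
        bands[name] = np.array([hoeffding_eci(ind[:, j], alpha_adj)
                                for j in range(len(thresholds))])
    return bands

# Toy usage with three synthetic candidates (hypothetical KPIs in [0, 1]).
rng = np.random.default_rng(0)
kpi = {f"model_{i}": rng.beta(2 + i, 2, size=500) for i in range(3)}
bands = ps_cdf_bands(kpi, np.linspace(0.0, 1.0, 11), alpha=0.1, select_top=2)
for name, band in bands.items():
    lo, hi = band[5]
    print(f"{name}: P(KPI <= 0.5) in [{lo:.3f}, {hi:.3f}]")

Any e-value-based interval (for instance, a betting-style confidence sequence) could replace the Hoeffding construction here; the key point the abstract makes is that a level adjustment derived from e-values, rather than sample splitting, is what restores statistical validity after selection.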

Metadata

arXiv ID: 2603.23055
Provider: ARXIV
Primary Category: stat.ML
Categories: stat.ML, cs.IT, cs.LG
Links: https://arxiv.org/abs/2603.23055v1 (abstract), https://arxiv.org/pdf/2603.23055v1 (PDF)
Published: 2026-03-24
Fetched: 2026-03-25 06:02
