February 27, 2026

Who Guards the Guardians? The Challenges of Evaluating Identifiability of Learned Representations

Authors

Shruti Joshi, Théo Saulus, Wieland Brendel, Philippe Brouillard, Dhanya Sridhar, Patrik Reizinger

Abstract

Identifiability in representation learning is commonly evaluated using standard metrics (e.g., MCC, DCI, R^2) on synthetic benchmarks with known ground-truth factors. These metrics are assumed to reflect recovery up to the equivalence class guaranteed by identifiability theory. We show that this assumption holds only under specific structural conditions: each metric implicitly encodes assumptions about both the data-generating process (DGP) and the encoder. When these assumptions are violated, metrics become misspecified and can produce systematic false positives and false negatives. Such failures occur both within classical identifiability regimes and in post-hoc settings where identifiability is most needed. We introduce a taxonomy separating DGP assumptions from encoder geometry, use it to characterise the validity domains of existing metrics, and release an evaluation suite for reproducible stress testing and comparison.
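To make the abstract's point concrete, here is a minimal sketch (not from the paper) of the mean correlation coefficient (MCC), computed as an optimal assignment over pairwise correlations between ground-truth and estimated latents. The example illustrates how the correlation measure itself encodes an assumption about the equivalence class: a Pearson-based MCC treats an elementwise invertible nonlinearity as a failure, while a rank-based (Spearman) MCC does not. Function names and the choice of `tanh` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr

def mcc(z_true, z_est, method="pearson"):
    """MCC: match each estimated dimension to a true one and average |corr|."""
    d = z_true.shape[1]
    if method == "pearson":
        c = np.corrcoef(z_true.T, z_est.T)[:d, d:]
    else:  # rank-based correlation, invariant to monotone transforms
        rho, _ = spearmanr(z_true, z_est)
        c = rho[:d, d:]
    # Hungarian assignment maximizing total |correlation|
    row, col = linear_sum_assignment(-np.abs(c))
    return np.abs(c[row, col]).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(5000, 3))                       # ground-truth factors
z_hat = z[:, [2, 0, 1]] * np.array([2.0, -1.0, 0.5])  # permutation + scaling
z_hat_nl = np.tanh(z_hat)                             # elementwise invertible warp

print(mcc(z, z_hat))                    # near 1: matches linear equivalence class
print(mcc(z, z_hat_nl))                 # drops: Pearson penalizes the nonlinearity
print(mcc(z, z_hat_nl, "spearman"))     # near 1: rank correlation is invariant
```

The gap between the last two numbers is exactly the kind of metric misspecification the abstract describes: the latents are recovered up to an invertible elementwise map, yet one metric reports a failure (a false negative) while the other does not.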

Metadata

arXiv ID: 2602.24278
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-02-27
Fetched: 2026-03-02 06:04
