March 24, 2026

Detecting Non-Membership in LLM Training Data via Rank Correlations

Authors

Pranav Shetty, Mirazul Haque, Zhiqiang Ma, Xiaomo Liu

Abstract

As large language models (LLMs) are trained on increasingly vast and opaque text corpora, determining which data contributed to training has become essential for copyright enforcement, compliance auditing, and user trust. While prior work focuses on detecting whether a dataset was used in training (membership inference), the complementary problem -- verifying that a dataset was not used -- has received little attention. We address this gap by introducing PRISM, a test that detects dataset-level non-membership using only grey-box access to model logits. Our key insight is that two models that have not seen a dataset exhibit higher rank correlation in their normalized token log probabilities than when one model has been trained on that data. Using this observation, we construct a correlation-based test that detects non-membership. Empirically, PRISM reliably rules out membership in training data across all datasets tested while avoiding false positives, thus offering a framework for verifying that specific datasets were excluded from LLM training.
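The abstract's core signal can be sketched in a few lines: compute the Spearman rank correlation between two models' per-token log probabilities on the same text, and read a high correlation as evidence that neither model has trained on it. This is a minimal illustration with synthetic scores, not the paper's method; the normalization scheme, score extraction, and decision threshold used by PRISM are not specified in the abstract and are assumed here.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Rank correlation is invariant to any monotone normalization of the
    scores, which is one reason it is a natural choice for comparing
    log probabilities across differently calibrated models."""
    ranks_a = np.argsort(np.argsort(a))
    ranks_b = np.argsort(np.argsort(b))
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

# Toy illustration (synthetic, not real model outputs): two "models" whose
# token log probabilities share a common component behave like two models
# that have both NOT seen the data; shuffling one side destroys the shared
# rank structure, mimicking the lower correlation expected when one model
# memorized the dataset.
rng = np.random.default_rng(0)
shared = rng.normal(size=200)                  # common difficulty per token
model_a = shared + 0.1 * rng.normal(size=200)  # hypothetical model A scores
model_b = shared + 0.1 * rng.normal(size=200)  # hypothetical model B scores

high = spearman(model_a, model_b)              # non-member-like regime
low = spearman(model_a, rng.permutation(model_b))  # decorrelated regime
```

In this sketch `high` is close to 1 while `low` hovers near 0; a test in the spirit of PRISM would declare non-membership only when the observed correlation clears a calibrated threshold.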

Metadata

arXiv ID: 2603.22707
Provider: ARXIV
Primary Category: cs.CL
Comments: Accepted to EACL 2026 Main Conference
Published: 2026-03-24
Fetched: 2026-03-25 06:02
