February 23, 2026

CREDIT: Certified Ownership Verification of Deep Neural Networks Against Model Extraction Attacks

Authors

Bolin Shen, Zhan Cheng, Neil Zhenqiang Gong, Fan Yao, Yushun Dong

Abstract

Machine Learning as a Service (MLaaS) has emerged as a widely adopted paradigm for providing access to deep neural network (DNN) models, enabling users to conveniently leverage these models through standardized APIs. However, such services are highly vulnerable to Model Extraction Attacks (MEAs), where an adversary repeatedly queries a target model to collect input-output pairs and uses them to train a surrogate model that closely replicates its functionality. While numerous defense strategies have been proposed, verifying the ownership of a suspicious model with strict theoretical guarantees remains a challenging task. To address this gap, we introduce CREDIT, a certified ownership verification framework against MEAs. Specifically, we employ mutual information to quantify the similarity between DNN models, propose a practical verification threshold, and provide rigorous theoretical guarantees for ownership verification based on this threshold. We extensively evaluate our approach on several mainstream datasets across different domains and tasks, achieving state-of-the-art performance. Our implementation is publicly available at: https://github.com/LabRAI/CREDIT.
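The abstract does not specify CREDIT's estimator or threshold derivation, but the core idea it names, using mutual information as a similarity score between models, can be illustrated with a minimal sketch. The code below estimates empirical mutual information between the discrete label sequences two models produce on a shared query set; all function names and the toy label data are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of mutual-information-based model similarity.
# The estimator, query set, and certified threshold used by CREDIT are
# not described in the abstract; this only shows the general principle.
from collections import Counter
import math


def mutual_information(labels_a, labels_b):
    """Empirical mutual information (in nats) between two discrete label sequences."""
    n = len(labels_a)
    pa = Counter(labels_a)                 # marginal counts for model A
    pb = Counter(labels_b)                 # marginal counts for model B
    pab = Counter(zip(labels_a, labels_b)) # joint counts
    mi = 0.0
    for (a, b), c in pab.items():
        p_joint = c / n
        # p_joint * log( p_joint / (p_a * p_b) ), with counts rewritten inline
        mi += p_joint * math.log(p_joint * n * n / (pa[a] * pb[b]))
    return mi


# Toy intuition: a surrogate trained by extraction tends to mirror the
# victim's predictions (high MI), while an independently trained model
# yields predictions with low MI against the victim's.
victim      = [0, 1, 1, 0, 2, 2, 1, 0]
surrogate   = [0, 1, 1, 0, 2, 2, 1, 0]  # perfectly correlated with victim
independent = [2, 0, 1, 2, 0, 1, 2, 0]  # unrelated predictions

print(mutual_information(victim, surrogate) >
      mutual_information(victim, independent))
```

A verification scheme along these lines would flag a suspicious model as extracted when its MI against the victim exceeds a threshold; the paper's contribution is making that threshold choice rigorous, which this sketch does not attempt.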

Metadata

arXiv ID: 2602.20419
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-02-23
Fetched: 2026-02-25 06:05
