AI LLM March 11, 2026

Attribution as Retrieval: Model-Agnostic AI-Generated Image Attribution

Authors

Hongsong Wang, Renxi Cheng, Chaolei Han, Jie Gui

Abstract

With the rapid advancement of AIGC technologies, image forensics faces unprecedented challenges. Traditional methods cannot cope with the increasingly realistic images produced by rapidly evolving image generation techniques. To facilitate the identification of AI-generated images and the attribution of their source models, generative image watermarking and AI-generated image attribution have emerged as key research focuses in recent years. However, existing methods are model-dependent: they require access to the generative models and lack generality and scalability to new, unseen generators. To address these limitations, this work presents a new paradigm for AI-generated image attribution, formulating it as an instance retrieval problem rather than a conventional image classification problem. We propose an efficient model-agnostic framework called Low-bIt-plane-based Deepfake Attribution (LIDA). The input to LIDA is produced by a Low-Bit Fingerprint Generation module, and training consists of Unsupervised Pre-Training followed by Few-Shot Attribution Adaptation. Comprehensive experiments demonstrate that LIDA achieves state-of-the-art performance for both Deepfake detection and image attribution under zero- and few-shot settings. The code is available at https://github.com/hongsong-wang/LIDA
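The abstract names two ideas: a fingerprint built from an image's low bit-planes, and attribution performed by retrieval rather than classification. The sketch below is not the paper's actual LIDA implementation (whose module details are not given in the abstract); it only illustrates the two ideas under plain assumptions, with NumPy bit-masking for the low-bit fingerprint and nearest-neighbour search standing in for instance retrieval. The function names and the embedding step are hypothetical.

```python
import numpy as np

def low_bit_planes(image: np.ndarray, n_planes: int = 2) -> np.ndarray:
    """Keep only the n least-significant bit-planes of an 8-bit image.

    Low bit-planes are where subtle generator artifacts tend to live,
    which is the intuition the abstract's fingerprint module suggests.
    """
    mask = (1 << n_planes) - 1          # e.g. 0b11 selects the two lowest planes
    return (image & mask).astype(np.uint8)

def attribute_by_retrieval(query_emb: np.ndarray,
                           gallery_embs: np.ndarray,
                           gallery_labels: list) -> str:
    """Attribution as retrieval: return the source-model label of the
    nearest gallery embedding (1-NN under Euclidean distance)."""
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    return gallery_labels[int(np.argmin(dists))]

# Toy usage: a 2x2 "image" and a two-entry gallery of model fingerprints.
img = np.array([[255, 4], [7, 128]], dtype=np.uint8)
fp = low_bit_planes(img, n_planes=2)        # only values 0..3 survive
gallery = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ["generator-A", "generator-B"]
pred = attribute_by_retrieval(np.array([0.9, 1.1]), gallery, labels)
```

Because the gallery is just a set of reference embeddings, adding a new, previously unseen generator only means appending its fingerprints to the gallery, no retraining of a fixed-class classifier is needed; this is the scalability argument the abstract makes for the retrieval formulation.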

Metadata

arXiv ID: 2603.10583
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-11
Fetched: 2026-03-12 04:21

Comment: To appear in CVPR 2026