AI · LLM · February 26, 2026

Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation

Authors

Xiaosen Wang, Zhijin Ge, Bohan Liu, Zheng Fang, Fengfan Zhou, Ruixuan Zhang, Shaokang Wang, Yuyang Luo

Abstract

Adversarial transferability refers to the capacity of adversarial examples crafted on a surrogate model to deceive other, unseen victim models. This property eliminates the need for direct access to the victim model during an attack, raising considerable security concerns in practical applications and attracting substantial research attention in recent years. In this work, we identify the lack of a standardized framework and criteria for evaluating transfer-based attacks, which can lead to biased assessments of existing approaches. To fill this gap, we conduct an exhaustive review of hundreds of related works, organizing transfer-based attacks into six distinct categories. We then propose a comprehensive framework designed to serve as a benchmark for evaluating these attacks. In addition, we delineate common strategies that enhance adversarial transferability and highlight prevalent issues that can lead to unfair comparisons. Finally, we provide a brief review of transfer-based attacks beyond image classification.
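The core idea the abstract describes — an adversarial example crafted against one model often fools a different, never-queried model — can be sketched with a deliberately tiny example. The linear classifiers, weight vectors, and epsilon below are illustrative assumptions, not from the paper; real transfer attacks use deep networks and iterative gradient methods such as MI-FGSM.

```python
import numpy as np

# Two toy linear classifiers: label = sign(w . x).
# Their weights differ, standing in for a surrogate and a victim model.
w_surrogate = np.array([1.0, 0.8])   # attacker has full access to this model
w_victim    = np.array([0.9, 1.1])   # attacker never queries this model

x, y = np.array([0.5, 0.5]), 1       # a clean input with true label +1

# Both models classify the clean input correctly.
assert np.sign(w_surrogate @ x) == y and np.sign(w_victim @ x) == y

# One FGSM step computed on the surrogate only: for loss L = -y * (w . x),
# grad_x L = -y * w, so x_adv = x + eps * sign(grad_x L).
eps = 0.6
x_adv = x + eps * np.sign(-y * w_surrogate)

print("surrogate fooled:", np.sign(w_surrogate @ x_adv) != y)  # True
print("victim fooled:   ", np.sign(w_victim @ x_adv) != y)     # True
```

Because the two weight vectors point in similar directions, the perturbation computed against the surrogate also crosses the victim's decision boundary — the low-dimensional analogue of the transferability this paper reviews and benchmarks.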

Metadata

arXiv ID: 2602.23117
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-26
Fetched: 2026-02-27 04:35

Secondary Category: cs.AI
Code: https://github.com/Trustworthy-AI-Group/TransferAttack