March 11, 2026

Backdoor Directions in Vision Transformers

Authors

Sengim Karayalcin, Marina Krcek, Pin-Yu Chen, Stjepan Picek

Abstract

This paper investigates how backdoor attacks are represented within Vision Transformers (ViTs). Assuming knowledge of the trigger, we identify a specific "trigger direction" in the model's activations that corresponds to the internal representation of the trigger. We confirm the causal role of this linear direction by showing that interventions in both activation and parameter space consistently modulate the model's backdoor behavior across multiple datasets and attack types. Using this direction as a diagnostic tool, we trace how backdoor features are processed across layers. Our analysis reveals distinct qualitative differences: static-patch triggers follow a different internal logic than stealthy, distributed triggers. We further examine the link between backdoors and adversarial attacks, specifically testing whether PGD-based perturbations (de-)activate the identified trigger mechanism. Finally, we propose a data-free, weight-based detection scheme for stealthy-trigger attacks. Our findings show that mechanistic interpretability offers a robust framework for diagnosing and addressing security vulnerabilities in computer vision.
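
The abstract does not spell out how the trigger direction is computed or how the interventions are performed; the sketch below (PyTorch) shows one plausible reading under stated assumptions, not the authors' implementation. It estimates the direction as the difference of mean CLS-token activations between triggered and clean inputs at one ViT block, then ablates that direction with a forward hook to test whether the backdoor behavior drops. The names `estimate_trigger_direction`, `ablate_direction`, the `model.blocks[layer_idx]` layout (timm-style ViT), and the choice of the CLS token are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact procedure): estimate a "trigger
# direction" as the difference of mean CLS-token activations between
# triggered and clean inputs, then project it out via a forward hook.
# Assumes `model` is a timm-style ViT whose blocks live in `model.blocks`
# and return tensors of shape [batch, tokens, dim]; `clean` and `triggered`
# are matching batches of preprocessed images.
import torch


@torch.no_grad()
def estimate_trigger_direction(model, clean, triggered, layer_idx):
    acts = {}

    def hook(_module, _inputs, output):
        # Keep the CLS-token activation for the current batch.
        acts["cls"] = output[:, 0, :].detach()

    handle = model.blocks[layer_idx].register_forward_hook(hook)
    model(clean)
    clean_cls = acts["cls"]
    model(triggered)
    trig_cls = acts["cls"]
    handle.remove()

    direction = trig_cls.mean(0) - clean_cls.mean(0)
    return direction / direction.norm()


def ablate_direction(model, direction, layer_idx):
    # Project the trigger direction out of every token's activation at the
    # chosen block; if the direction is causal, this should suppress the
    # backdoor while leaving clean behavior largely intact.
    d = direction / direction.norm()

    def hook(_module, _inputs, output):
        coeff = output @ d                      # [batch, tokens]
        return output - coeff.unsqueeze(-1) * d

    return model.blocks[layer_idx].register_forward_hook(hook)
```

Usage: run triggered inputs through the model with the ablation hook registered and compare attack success rates with and without it; removing the returned handle restores the original behavior. This is only a first-order check of the kind of activation-space intervention the abstract describes.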

Metadata

arXiv ID: 2603.10806
Provider: ARXIV
Categories: cs.CV (primary), cs.CR
Comments: 31 pages, 16 figures
Published: 2026-03-11
Fetched: 2026-03-12 04:21
