When LoRA Betrays: Backdooring Text-to-Image Models by Masquerading as Benign Adapters

Authors

Liangwei Lyu, Jiaqi Xu, Jianwei Ding, Qiyao Deng

Abstract

Low-Rank Adaptation (LoRA) has emerged as a leading technique for efficiently fine-tuning text-to-image diffusion models, and its widespread adoption on open-source platforms has fostered a vibrant culture of model sharing and customization. However, the same modular and plug-and-play flexibility that makes LoRA appealing also introduces a broader attack surface. To highlight this risk, we propose Masquerade-LoRA (MasqLoRA), the first systematic attack framework that leverages an independent LoRA module as the attack vehicle to stealthily inject malicious behavior into text-to-image diffusion models. MasqLoRA operates by freezing the base model parameters and updating only the low-rank adapter weights using a small number of "trigger word-target image" pairs. This enables the attacker to train a standalone backdoor LoRA module that embeds a hidden cross-modal mapping: when the module is loaded and a specific textual trigger is provided, the model produces a predefined visual output; otherwise, it behaves indistinguishably from the benign model, ensuring the stealthiness of the attack. Experimental results demonstrate that MasqLoRA can be trained with minimal resource overhead and achieves a high attack success rate of 99.8%. MasqLoRA reveals a severe and unique threat in the AI supply chain, underscoring the urgent need for dedicated defense mechanisms for the LoRA-centric sharing ecosystem.
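The core mechanism the abstract describes — freezing the base weights and training only a low-rank adapter on "trigger → target" pairs — can be illustrated with a toy linear-layer sketch. Everything below (the dimensions, the single training pair, plain SGD on a squared loss) is a simplifying assumption for illustration, not the paper's actual training setup; a real attack would apply this to the attention/projection layers of a diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2  # toy sizes; real layers are ~10^3-dim, rank 4-64

W = rng.standard_normal((d_out, d_in))        # frozen base weight (never updated)
A = 0.01 * rng.standard_normal((rank, d_in))  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def forward(x, W, A, B):
    """LoRA forward pass: base path W x plus the low-rank correction B (A x)."""
    return W @ x + B @ (A @ x)

# Zero-initialized B makes a freshly attached adapter an exact no-op,
# which is what lets a malicious LoRA masquerade as benign when untriggered:
x = rng.standard_normal(d_in)
assert np.allclose(forward(x, W, A, B), W @ x)

# Train only A and B so a "trigger" input maps to a fixed "target" output,
# mimicking the trigger word -> target image pairs; W stays frozen throughout.
trigger = rng.standard_normal(d_in)
target = rng.standard_normal(d_out)
lr = 0.01
for _ in range(3000):
    err = forward(trigger, W, A, B) - target  # residual on the trigger pair
    grad_B = np.outer(err, A @ trigger)       # dL/dB for L = 0.5 * ||err||^2
    grad_A = np.outer(B.T @ err, trigger)     # dL/dA
    B -= lr * grad_B
    A -= lr * grad_A

delta = B @ A  # the entire backdoor lives in this rank-<=2 update matrix
```

The key property is that the backdoor is fully contained in the standalone adapter (`A`, `B`): it can be shipped as an ordinary LoRA file and merged into any copy of the frozen base model, which is why the abstract frames this as a supply-chain threat.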

Metadata

arXiv ID: 2602.21977
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-02-25
Fetched: 2026-02-26 05:00
