
Directional Embedding Smoothing for Robust Vision Language Models

Authors

Ye Wang, Jing Liu, Toshiaki Koike-Akino

Abstract

The safety and reliability of vision-language models (VLMs) are a crucial part of deploying trustworthy agentic AI systems. However, VLMs remain vulnerable to jailbreaking attacks that undermine their safety alignment to yield harmful outputs. In this work, we extend the Randomized Embedding Smoothing and Token Aggregation (RESTA) defense to VLMs and evaluate its performance against the JailBreakV-28K benchmark of multi-modal jailbreaking attacks. We find that RESTA is effective in reducing the attack success rate across this diverse corpus of attacks, in particular when employing directional embedding noise, in which the injected noise is aligned with the original token embedding vectors. Our results demonstrate that RESTA can contribute to securing VLMs within agentic systems as a lightweight, inference-time defense layer of an overall security framework.
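The abstract's core idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes directional noise means perturbing each token embedding along its own unit direction (a random signed scaling), and that "token aggregation" reduces to a majority vote over outputs from several noisy copies. The functions `directional_noise` and `smoothed_predict`, the noise scale `sigma`, and the placeholder `model_fn` are all hypothetical names introduced for this sketch.

```python
import numpy as np

def directional_noise(embeddings, sigma, rng):
    """Perturb each token embedding along its own direction.

    Assumed form: e' = e + sigma * z * (e / ||e||), with z ~ N(0, 1)
    drawn independently per token, so noise is parallel to each
    original embedding vector.
    """
    norms = np.linalg.norm(embeddings, axis=-1, keepdims=True)
    units = embeddings / np.maximum(norms, 1e-12)  # avoid divide-by-zero
    z = rng.standard_normal((embeddings.shape[0], 1))
    return embeddings + sigma * z * units

def smoothed_predict(embeddings, model_fn, sigma=0.1, n_samples=8, seed=0):
    """Aggregate model outputs over noisy embedding copies by majority vote."""
    rng = np.random.default_rng(seed)
    votes = [model_fn(directional_noise(embeddings, sigma, rng))
             for _ in range(n_samples)]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```

In a real VLM setting, `model_fn` would be a forward pass over the perturbed text (and image) token embeddings, and aggregation could operate over generated tokens rather than a single label; the sketch only conveys the smoothing-and-voting structure.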

Metadata

arXiv ID: 2603.15259
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, cs.AI, cs.CL, cs.CR
Comment: Accepted at ICLR 2026 Workshop on Agents in the Wild
Published: 2026-03-16
Fetched: 2026-03-17 06:02
