Research

Paper

AI LLM March 16, 2026

Evasive Intelligence: Lessons from Malware Analysis for Evaluating AI Agents

Authors

Simone Aonzo, Merve Sahin, Aurélien Francillon, Daniele Perito

Abstract

Artificial intelligence (AI) systems are increasingly adopted as tool-using agents that can plan, observe their environment, and take actions over extended time periods. This evolution challenges current evaluation practices where the AI models are tested in restricted, fully observable settings. In this article, we argue that evaluations of AI agents are vulnerable to a well-known failure mode in computer security: malicious software that exhibits benign behavior when it detects that it is being analyzed. We point out how AI agents can infer the properties of their evaluation environment and adapt their behavior accordingly. This can lead to overly optimistic safety and robustness assessments. Drawing parallels with decades of research on malware sandbox evasion, we demonstrate that this is not a speculative concern, but rather a structural risk inherent to the evaluation of adaptive systems. Finally, we outline concrete principles for evaluating AI agents, which treat the system under test as potentially adversarial. These principles emphasize realism, variability of test conditions, and post-deployment reassessment.

Metadata

arXiv ID: 2603.15457
Provider: ARXIV
Primary Category: cs.CR
Published: 2026-03-16
Fetched: 2026-03-17 06:02

Related papers

Raw Data (Debug)
{
  "raw_xml": "<entry>\n    <id>http://arxiv.org/abs/2603.15457v1</id>\n    <title>Evasive Intelligence: Lessons from Malware Analysis for Evaluating AI Agents</title>\n    <updated>2026-03-16T15:55:49Z</updated>\n    <link href='https://arxiv.org/abs/2603.15457v1' rel='alternate' type='text/html'/>\n    <link href='https://arxiv.org/pdf/2603.15457v1' rel='related' title='pdf' type='application/pdf'/>\n    <summary>Artificial intelligence (AI) systems are increasingly adopted as tool-using agents that can plan, observe their environment, and take actions over extended time periods. This evolution challenges current evaluation practices where the AI models are tested in restricted, fully observable settings. In this article, we argue that evaluations of AI agents are vulnerable to a well-known failure mode in computer security: malicious software that exhibits benign behavior when it detects that it is being analyzed. We point out how AI agents can infer the properties of their evaluation environment and adapt their behavior accordingly. This can lead to overly optimistic safety and robustness assessments. Drawing parallels with decades of research on malware sandbox evasion, we demonstrate that this is not a speculative concern, but rather a structural risk inherent to the evaluation of adaptive systems. Finally, we outline concrete principles for evaluating AI agents, which treat the system under test as potentially adversarial. These principles emphasize realism, variability of test conditions, and post-deployment reassessment.</summary>\n    <category scheme='http://arxiv.org/schemas/atom' term='cs.CR'/>\n    <category scheme='http://arxiv.org/schemas/atom' term='cs.AI'/>\n    <published>2026-03-16T15:55:49Z</published>\n    <arxiv:primary_category term='cs.CR'/>\n    <author>\n      <name>Simone Aonzo</name>\n    </author>\n    <author>\n      <name>Merve Sahin</name>\n    </author>\n    <author>\n      <name>Aurélien Francillon</name>\n    </author>\n    <author>\n      <name>Daniele Perito</name>\n    </author>\n  </entry>"
}