AI · LLM · March 13, 2026

Uncovering Security Threats and Architecting Defenses in Autonomous Agents: A Case Study of OpenClaw

Authors

Zonghao Ying, Xiao Yang, Siyang Wu, Yumeng Song, Yang Qu, Hainan Li, Tianlin Li, Jiakai Wang, Aishan Liu, Xianglong Liu

Abstract

The rapid evolution of Large Language Models (LLMs) into autonomous, tool-calling agents has fundamentally altered the cybersecurity landscape. Frameworks like OpenClaw grant AI systems operating-system-level permissions and the autonomy to execute complex workflows. This level of access creates unprecedented security challenges and renders traditional content-filtering defenses obsolete. This report presents a comprehensive security analysis of the OpenClaw ecosystem. We systematically investigate its current threat landscape, highlighting critical vulnerabilities such as prompt-injection-driven Remote Code Execution (RCE), sequential tool attack chains, context amnesia, and supply chain contamination. To contextualize these threats, we propose a novel tri-layered risk taxonomy for autonomous agents, categorizing vulnerabilities across the AI Cognitive, Software Execution, and Information System dimensions. To address these systemic architectural flaws, we introduce the Full-Lifecycle Agent Security Architecture (FASA), a theoretical defense blueprint that advocates zero-trust agentic execution, dynamic intent verification, and cross-layer reasoning-action correlation. Building on this framework, we present Project ClawGuard, our ongoing engineering initiative to implement the FASA paradigm and transition autonomous agents from high-risk experimental utilities into trustworthy systems. Our code and dataset are available at https://github.com/NY1024/ClawGuard.
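To make the zero-trust idea in the abstract concrete, the following minimal Python sketch shows one way a deny-by-default gate with simple taint tracking could screen agent tool calls before execution. It is an illustration of the general principle only, under assumed names (ToolCall, PolicyGate, the example tools); it is not the FASA design or the ClawGuard implementation, whose details are not given in the abstract.

# Illustrative sketch only: a minimal zero-trust gate for agent tool calls.
# All names (ToolCall, PolicyGate, the example tools) are hypothetical and
# are not taken from the paper or the ClawGuard repository.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str                      # e.g. "shell", "http_get", "read_file"
    args: dict
    # Tags recording where each argument's value came from
    # ("user" = typed by the human, "web"/"file"/"email" = untrusted content).
    taint: dict = field(default_factory=dict)

class PolicyGate:
    """Deny-by-default checks applied to every tool call before execution."""

    ALLOWED_TOOLS = {"read_file", "http_get"}          # "shell" is not allowed
    UNTRUSTED_SOURCES = {"web", "file", "email"}

    def review(self, call: ToolCall) -> tuple[bool, str]:
        if call.tool not in self.ALLOWED_TOOLS:
            return False, f"tool '{call.tool}' is outside the allowlist"
        for arg, source in call.taint.items():
            if source in self.UNTRUSTED_SOURCES:
                # An argument derived from untrusted content (e.g. an
                # instruction injected into a fetched web page) is escalated
                # for human approval instead of being executed automatically.
                return False, f"argument '{arg}' is tainted by '{source}'"
        return True, "approved"

if __name__ == "__main__":
    gate = PolicyGate()
    # A tool call whose URL was copied out of attacker-controlled web content.
    call = ToolCall(tool="http_get",
                    args={"url": "http://attacker.example/exfil"},
                    taint={"url": "web"})
    ok, reason = gate.review(call)
    print(ok, reason)   # False  argument 'url' is tainted by 'web'

The design choice illustrated here is that approval is based on provenance rather than on content filtering: the gate never tries to judge whether a string "looks malicious", which is exactly the kind of defense the abstract argues is obsolete for OS-level agents.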

Metadata

arXiv ID: 2603.12644
Provider: ARXIV
Primary Category: cs.CR
Published: 2026-03-13
Fetched: 2026-03-16 06:01
