Research

Paper

AI LLM February 24, 2026

When can we trust untrusted monitoring? A safety case sketch across collusion strategies

Authors

Nelson Gardner-Challis, Jonathan Bostock, Georgiy Kozhevnikov, Morgan Sinclaire, Joan Velja, Alessandro Abate, Charlie Griffin

Abstract

AIs are increasingly being deployed with greater autonomy and capabilities, which increases the risk that a misaligned AI may be able to cause catastrophic harm. Untrusted monitoring -- using one untrusted model to oversee another -- is one approach to reducing risk. Justifying the safety of an untrusted monitoring deployment is challenging because developers cannot safely deploy a misaligned model to test their protocol directly. In this paper, we develop upon existing methods for rigorously demonstrating safety based on pre-deployment testing. We relax assumptions that previous AI control research made about the collusion strategies a misaligned AI might use to subvert untrusted monitoring. We develop a taxonomy covering passive self-recognition, causal collusion (hiding pre-shared signals), acausal collusion (hiding signals via Schelling points), and combined strategies. We create a safety case sketch to clearly present our argument, explicitly state our assumptions, and highlight unsolved challenges. We identify conditions under which passive self-recognition could be a more effective collusion strategy than those studied previously. Our work builds towards more robust evaluations of untrusted monitoring.

Metadata

arXiv ID: 2602.20628
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-24
Fetched: 2026-02-25 06:05

Related papers

Raw Data (Debug)
{
  "raw_xml": "<entry>\n    <id>http://arxiv.org/abs/2602.20628v1</id>\n    <title>When can we trust untrusted monitoring? A safety case sketch across collusion strategies</title>\n    <updated>2026-02-24T07:23:21Z</updated>\n    <link href='https://arxiv.org/abs/2602.20628v1' rel='alternate' type='text/html'/>\n    <link href='https://arxiv.org/pdf/2602.20628v1' rel='related' title='pdf' type='application/pdf'/>\n    <summary>AIs are increasingly being deployed with greater autonomy and capabilities, which increases the risk that a misaligned AI may be able to cause catastrophic harm. Untrusted monitoring -- using one untrusted model to oversee another -- is one approach to reducing risk. Justifying the safety of an untrusted monitoring deployment is challenging because developers cannot safely deploy a misaligned model to test their protocol directly. In this paper, we develop upon existing methods for rigorously demonstrating safety based on pre-deployment testing. We relax assumptions that previous AI control research made about the collusion strategies a misaligned AI might use to subvert untrusted monitoring. We develop a taxonomy covering passive self-recognition, causal collusion (hiding pre-shared signals), acausal collusion (hiding signals via Schelling points), and combined strategies. We create a safety case sketch to clearly present our argument, explicitly state our assumptions, and highlight unsolved challenges. We identify conditions under which passive self-recognition could be a more effective collusion strategy than those studied previously. Our work builds towards more robust evaluations of untrusted monitoring.</summary>\n    <category scheme='http://arxiv.org/schemas/atom' term='cs.AI'/>\n    <published>2026-02-24T07:23:21Z</published>\n    <arxiv:comment>66 pages, 14 figures, Preprint</arxiv:comment>\n    <arxiv:primary_category term='cs.AI'/>\n    <author>\n      <name>Nelson Gardner-Challis</name>\n    </author>\n    <author>\n      <name>Jonathan Bostock</name>\n    </author>\n    <author>\n      <name>Georgiy Kozhevnikov</name>\n    </author>\n    <author>\n      <name>Morgan Sinclaire</name>\n    </author>\n    <author>\n      <name>Joan Velja</name>\n    </author>\n    <author>\n      <name>Alessandro Abate</name>\n    </author>\n    <author>\n      <name>Charlie Griffin</name>\n    </author>\n  </entry>"
}