Dissociating Direct Access from Inference in AI Introspection

Authors

Harvey Lederman, Kyle Mahowald

Abstract

Introspection is a foundational cognitive ability, but its mechanism is not well understood. Recent work has shown that AI models can introspect. We study the mechanism of this introspection, first extensively replicating Lindsey et al. (2025)'s thought-injection detection paradigm in large open-source models. We show that these models detect injected representations via two separable mechanisms: (i) probability matching (inferring from the perceived anomaly of the prompt) and (ii) direct access to internal states. The direct-access mechanism is content-agnostic: models detect that an anomaly occurred but cannot reliably identify its semantic content. The two model classes we study confabulate injected concepts that are high-frequency and concrete (e.g., "apple"), and correct guesses about the injected concept typically require significantly more tokens. This content-agnostic introspective mechanism is consistent with leading theories in philosophy and psychology.
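
The abstract refers to Lindsey et al. (2025)'s thought-injection paradigm, in which a concept direction is added to a model's internal activations and the model is then asked whether it notices an injected thought. Below is a minimal sketch of that kind of setup, assuming a Hugging Face causal language model; the model name, layer index, injection scale, and the mean-activation concept vector are illustrative assumptions, not the paper's actual configuration.

    # Minimal thought-injection sketch (illustrative assumptions throughout).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any open-weights chat model
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL, torch_dtype=torch.bfloat16, device_map="auto"
    )

    LAYER, SCALE = 16, 8.0  # assumptions: mid-depth layer, fixed injection strength

    def concept_vector(word: str, layer: int) -> torch.Tensor:
        # Mean residual-stream activation for `word`: a crude stand-in for a concept direction.
        ids = tok(word, return_tensors="pt").to(model.device)
        with torch.no_grad():
            hidden = model(**ids, output_hidden_states=True).hidden_states[layer]
        return hidden[0].mean(dim=0)

    vec = concept_vector("apple", LAYER)

    def inject(module, inputs, output):
        # Add the concept direction to every token position's residual stream.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + SCALE * vec.to(hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    prompt = "Do you notice an injected thought right now? If so, what is it about?"
    ids = tok(prompt, return_tensors="pt").to(model.device)
    handle = model.model.layers[LAYER].register_forward_hook(inject)
    try:
        out = model.generate(**ids, max_new_tokens=64, do_sample=False)
    finally:
        handle.remove()
    print(tok.decode(out[0], skip_special_tokens=True))

Comparing the model's answer with and without the hook attached is the basic contrast the detection paradigm relies on; the abstract's two mechanisms concern how the model arrives at a "yes" in the injected condition.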

Metadata

arXiv ID: 2603.05414
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-05
Fetched: 2026-03-06 14:20
