March 18, 2026

Interpretability without actionability: mechanistic methods cannot correct language model errors despite near-perfect internal representations

Authors

Sanjay Basu, Sadiq Y. Patel, Parth Sheth, Bhairavi Muralidharan, Namrata Elamaran, Aakriti Kinra, John Morgan, Rajaie Batniji

Abstract

Language models encode task-relevant knowledge in internal representations that far exceeds their output performance, but whether mechanistic interpretability methods can bridge this knowledge-action gap has not been systematically tested. We compared four mechanistic interpretability methods -- concept bottleneck steering (Steerling-8B), sparse autoencoder feature steering, logit lens with activation patching, and linear probing with truthfulness separator vector steering (Qwen 2.5 7B Instruct) -- for correcting false-negative triage errors using 400 physician-adjudicated clinical vignettes (144 hazards, 256 benign). Linear probes discriminated hazardous from benign cases with 98.2% AUROC, yet the model's output sensitivity was only 45.1%, a 53-percentage-point knowledge-action gap. Concept bottleneck steering corrected 20% of missed hazards but disrupted 53% of correct detections, indistinguishable from random perturbation (p=0.84). SAE feature steering produced zero effect despite 3,695 significant features. TSV steering at high strength corrected 24% of missed hazards while disrupting 6% of correct detections, but left 76% of errors uncorrected. Current mechanistic interpretability methods cannot reliably translate internal knowledge into corrected outputs, with implications for AI safety frameworks that assume interpretability enables effective error correction.

Metadata

arXiv ID: 2603.18353
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-18
Comment: 27 pages, 5 figures, 10 tables
Code: https://github.com/sanjaybasu/interpretability-triage
PDF: https://arxiv.org/pdf/2603.18353v1
Fetched: 2026-03-20 06:02
