February 26, 2026

Interpreting and Steering State-Space Models via Activation Subspace Bottlenecks

Authors

Vamshi Sunku Mohan, Kaustubh Gupta, Aneesha Das, Chandan Singh

Abstract

State-space models (SSMs) have emerged as an efficient strategy for building powerful language models, avoiding the quadratic complexity of computing attention in transformers. Despite their promise, the interpretability and steerability of modern SSMs remain relatively underexplored. We take a major step in this direction by identifying activation subspace bottlenecks in the Mamba family of SSMs using tools from mechanistic interpretability. We then introduce a test-time steering intervention that simply multiplies the activations of the identified bottlenecks by a scalar. Across 5 SSMs and 6 diverse benchmarks, this intervention improves performance by an average of 8.27%, without requiring any task-specific tuning. Finally, we validate that the identified bottlenecks are indeed hindering performance by modifying them to yield an architecture we call Stable-Mamba, which achieves long-context performance gains when retrained from scratch.
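The steering intervention is simple enough to sketch concretely: at test time, the activations flowing out of an identified bottleneck are multiplied by a fixed scalar, with no weight updates. Below is a minimal PyTorch sketch using a forward hook. The module path, choice of layer, and scale value are illustrative assumptions, since the abstract does not specify which projections are scaled or by how much.

import torch

def scale_activations(module: torch.nn.Module, scale: float):
    """Multiply a module's output activations by a fixed scalar at inference.

    Returns the hook handle so the intervention can be removed later.
    """
    def hook(mod, inputs, output):
        # Some blocks return (hidden_states, *extras); scale only the
        # hidden states and pass any extras through unchanged.
        if isinstance(output, tuple):
            return (output[0] * scale,) + output[1:]
        return output * scale
    return module.register_forward_hook(hook)

# Hypothetical usage: the layer path and scale value are placeholders,
# not the paper's reported configuration.
# handle = scale_activations(model.backbone.layers[12].mixer.out_proj, scale=1.5)
# ... run evaluation ...
# handle.remove()  # restore the unmodified model

Because the hook only rescales activations on the forward pass, the intervention is strictly test-time: removing the handle restores the original model exactly.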

Metadata

arXiv ID: 2602.22719
Link: https://arxiv.org/abs/2602.22719v1
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-02-26
Fetched: 2026-02-27 04:35
