
Causal Concept Graphs in LLM Latent Space for Stepwise Reasoning

Authors

Md Muntaqim Meherab, Noor Islam S. Mohammad, Faiza Feroz

Abstract

Sparse autoencoders can localize where concepts live in language models, but not how they interact during multi-step reasoning. We propose Causal Concept Graphs (CCG): a directed acyclic graph over sparse, interpretable latent features, where edges capture learned causal dependencies between concepts. We combine task-conditioned sparse autoencoders for concept discovery with DAGMA-style differentiable structure learning for graph recovery and introduce the Causal Fidelity Score (CFS) to evaluate whether graph-guided interventions induce larger downstream effects than random ones. On ARC-Challenge, StrategyQA, and LogiQA with GPT-2 Medium, across five seeds ($n{=}15$ paired runs), CCG achieves $\mathrm{CFS}=5.654\pm0.625$, outperforming ROME-style tracing ($3.382\pm0.233$), SAE-only ranking ($2.479\pm0.196$), and a random baseline ($1.032\pm0.034$), with $p<0.0001$ after Bonferroni correction. Learned graphs are sparse (5–6% edge density), domain-specific, and stable across seeds.
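Two pieces of the method can be sketched concretely. The DAGMA acyclicity penalty is a known log-det formulation from the differentiable structure-learning literature; the Causal Fidelity Score is introduced by this paper, and since its exact definition is not given in the abstract, the version below is a hypothetical ratio-style reading (mean effect of graph-guided interventions over mean effect of random interventions), consistent with the random baseline scoring close to 1. All variable names and the synthetic effect data are illustrative, not taken from the paper.

```python
import numpy as np

def dagma_acyclicity(W, s=1.0):
    """DAGMA-style log-det acyclicity penalty:
    h(W) = -log det(s*I - W∘W) + d*log(s).

    h(W) = 0 when the weighted adjacency matrix W is acyclic, and
    h(W) > 0 when W contains a cycle (within DAGMA's feasible set).
    """
    d = W.shape[0]
    _, logdet = np.linalg.slogdet(s * np.eye(d) - W * W)
    return -logdet + d * np.log(s)

def causal_fidelity_score(guided_effects, random_effects, eps=1e-12):
    """Hypothetical CFS: ratio of mean downstream intervention effects.

    A value near 1 means graph-guided interventions are no better than
    random ones, matching the ~1.03 random baseline in the abstract;
    this is a sketch, not the paper's actual definition.
    """
    return float(np.mean(guided_effects) / (np.mean(random_effects) + eps))

# Toy demonstration with synthetic downstream-effect magnitudes
# (e.g., shift in output logits after ablating a concept feature):
rng = np.random.default_rng(0)
random_effects = rng.exponential(scale=0.1, size=200)   # random ablations
guided_effects = rng.exponential(scale=0.55, size=200)  # graph-guided ablations
cfs = causal_fidelity_score(guided_effects, random_effects)
```

In this reading, a learned graph is useful exactly when `cfs` substantially exceeds 1, and the acyclicity penalty is what lets the graph be recovered by smooth optimization rather than combinatorial search.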

Metadata

arXiv ID: 2603.10377
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-11
Fetched: 2026-03-12 04:21
