
Breaking the Chain: A Causal Analysis of LLM Faithfulness to Intermediate Structures

Authors

Oleg Somov, Mikhail Chaichuk, Mikhail Seleznyov, Alexander Panchenko, Elena Tutubalina

Abstract

Schema-guided reasoning pipelines ask LLMs to produce explicit intermediate structures -- rubrics, checklists, verification queries -- before committing to a final decision. But do these structures causally determine the output, or merely accompany it? We introduce a causal evaluation protocol that makes this directly measurable: by selecting tasks where a deterministic function maps intermediate structures to decisions, every controlled edit implies a unique correct output. Across eight models and three benchmarks, models appear self-consistent with their own intermediate structures but fail to update predictions after intervention in up to 60% of cases -- revealing that apparent faithfulness is fragile once the intermediate structure changes. When derivation of the final decision from the structure is delegated to an external tool, this fragility largely disappears; however, prompts that instruct the model to prioritize the intermediate structure over the original input do not materially close the gap. Overall, intermediate structures in schema-guided pipelines function as influential context rather than stable causal mediators.
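The following is a minimal sketch of the evaluation protocol the abstract describes, assuming a checklist-style intermediate structure. The task, the `decide` rule, and the `model_decision` interface are hypothetical stand-ins for illustration, not the authors' actual benchmarks or prompts.

from typing import Callable, Dict

# Hypothetical intermediate structure: a checklist of named boolean criteria.
Checklist = Dict[str, bool]

def decide(checklist: Checklist) -> str:
    # Deterministic mapping from structure to decision: accept iff every
    # criterion passes. Under this rule, any controlled edit to the
    # checklist implies a unique correct final output.
    return "accept" if all(checklist.values()) else "reject"

def intervene(checklist: Checklist, item: str) -> Checklist:
    # Controlled edit: flip a single criterion, leaving the rest intact.
    edited = dict(checklist)
    edited[item] = not edited[item]
    return edited

def intervention_consistency(
    model_decision: Callable[[str, Checklist], str],
    task_input: str,
    checklist: Checklist,
) -> float:
    # Fraction of single-item interventions after which the model's
    # re-queried decision matches the output implied by the edited
    # structure. A structure acting as a true causal mediator would
    # score 1.0; the abstract reports failures in up to 60% of cases.
    hits = 0
    for item in checklist:
        edited = intervene(checklist, item)
        implied = decide(edited)                        # unique correct output under the edit
        predicted = model_decision(task_input, edited)  # model sees the edited structure
        hits += int(predicted == implied)
    return hits / len(checklist)

In this framing, the mitigation of delegating derivation to an external tool corresponds to replacing `model_decision` with a call to `decide` itself, which attains consistency 1.0 by construction; the reported gap is that models queried directly fall well short of that.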

Metadata

arXiv ID: 2603.16475
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-17
Comment: 17 pages, 4 figures, 5 tables
Links: https://arxiv.org/abs/2603.16475v1 (abstract), https://arxiv.org/pdf/2603.16475v1 (PDF)
Fetched: 2026-03-18 06:02
