
Understanding Artificial Theory of Mind: Perturbed Tasks and Reasoning in Large Language Models

Authors

Christian Nickel, Laura Schrewe, Florian Mai, Lucie Flek

Abstract

Theory of Mind (ToM) refers to an agent's ability to model the internal states of others. Contributing to the debate over whether large language models (LLMs) exhibit genuine ToM capabilities, our study investigates their ToM robustness using perturbations of false-belief tasks and examines the potential of Chain-of-Thought (CoT) prompting to enhance performance and explain the LLM's decision. We introduce a handcrafted, richly annotated ToM dataset comprising classic and perturbed false-belief tasks, the corresponding spaces of valid reasoning chains for correct task completion, annotations of reasoning faithfulness, and task solutions, and we propose metrics to evaluate reasoning-chain correctness and the extent to which final answers are faithful to the reasoning traces of the generated CoT. We show a steep drop in ToM capabilities under task perturbation for all evaluated LLMs, calling into question whether any robust form of ToM is present. While CoT prompting improves overall ToM performance in a faithful manner, it surprisingly degrades accuracy for some perturbation classes, indicating that it must be applied selectively.
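
To make the setup concrete, below is a minimal Python sketch of the kind of evaluation the abstract describes: a classic false-belief task, a perturbed variant, and a toy check of whether the final answer is faithful to the generated reasoning chain. The task wording, the perturbation, the CoT trigger phrase, and the faithfulness heuristic are illustrative assumptions, not the paper's actual dataset or metrics.

# Illustrative sketch only. The task texts, the perturbation, and the
# faithfulness heuristic are hypothetical stand-ins for the materials the
# abstract describes; they are not the paper's dataset or metrics.

CLASSIC_TASK = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball to the box. "
    "Where will Sally look for the ball?"
)

# A perturbation keeps the surface story but changes the belief structure:
# here Sally witnesses the move, so no false belief arises and the correct
# answer flips from "basket" to "box".
PERTURBED_TASK = (
    "Sally puts her ball in the basket and watches through the window "
    "while Anne moves the ball to the box. "
    "Where will Sally look for the ball?"
)

COT_TRIGGER = " Let's think step by step."  # common zero-shot CoT suffix

def is_faithful(reasoning: str, final_answer: str) -> bool:
    """Toy faithfulness check: the final answer should name the location
    that the reasoning chain settled on last."""
    text = reasoning.lower()
    mentioned = [loc for loc in ("basket", "box") if loc in text]
    if not mentioned:
        return False
    concluded = max(mentioned, key=text.rfind)  # location mentioned last
    return concluded in final_answer.lower()

# A faithful and an unfaithful completion under this heuristic:
assert is_faithful("Sally did not see the move, so she believes the ball "
                   "is still in the basket.", "She will look in the basket.")
assert not is_faithful("Sally believes the ball is in the basket.",
                       "She will look in the box.")

In the abstract's framing, an unfaithful completion is one whose final answer diverges from its own reasoning trace; the string heuristic above is only a placeholder for the paper's annotated spaces of valid reasoning chains.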

Metadata

arXiv ID: 2602.22072
URL: https://arxiv.org/abs/2602.22072v1
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-02-25
Fetched: 2026-02-26 05:00
