
Emerging Human-like Strategies for Semantic Memory Foraging in Large Language Models

Authors

Eric Lacosse, Mariana Duarte, Peter M. Todd, Daniel C. McNamee

Abstract

Both humans and Large Language Models (LLMs) store a vast repository of semantic memories. In humans, efficient and strategic access to this memory store is a critical foundation for a variety of cognitive functions. Such access has long been a focus of psychology and the computational mechanisms behind it are now well characterized. Much of this understanding has been gleaned from a widely used neuropsychological and cognitive science assessment called the Semantic Fluency Task (SFT), which requires the generation of as many semantically constrained concepts as possible. Our goal is to apply mechanistic interpretability techniques to bring greater rigor to the study of semantic memory foraging in LLMs. To this end, we present preliminary results examining SFT as a case study. A central focus is on convergent and divergent patterns of generative memory search, which in humans play complementary strategic roles in efficient memory foraging. We show that these same behavioral signatures, critical to human performance on the SFT, also emerge as identifiable patterns in LLMs across distinct layers. Potentially, this analysis provides new insights into how LLMs may be adapted into closer cognitive alignment with humans, or alternatively, guided toward productive cognitive disalignment to enhance complementary strengths in human-AI interaction.

Metadata

arXiv ID: 2603.01822
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-02
Fetched: 2026-03-03 04:34
