March 03, 2026

A Neuropsychologically Grounded Evaluation of LLM Cognitive Abilities

Authors

Faiz Ghifari Haznitrama, Faeyza Rishad Ardi, Alice Oh

Abstract

Large language models (LLMs) exhibit a unified "general factor" of capability across 10 benchmarks, a finding confirmed by our factor analysis of 156 models, yet they still struggle with tasks that are trivial for humans. This is because current benchmarks focus on task completion and fail to probe the foundational cognitive abilities that underlie these behaviors. We address this by introducing the NeuroCognition benchmark, grounded in three adapted neuropsychological tests: Raven's Progressive Matrices (abstract relational reasoning), Spatial Working Memory (maintenance and systematic search), and the Wisconsin Card Sorting Test (cognitive flexibility). Our evaluation reveals that while models perform strongly on text, their performance degrades on images and as complexity increases. Furthermore, we observe that complex reasoning is not universally beneficial, whereas simple, human-like strategies yield partial gains. We also find that NeuroCognition correlates positively with standard general-capability benchmarks while still measuring distinct cognitive abilities beyond them. Overall, NeuroCognition highlights where current LLMs align with human-like intelligence and where they lack core adaptive cognition, showing its potential to serve as a verifiable, scalable source for improving LLMs.
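The "general factor" claim rests on factor analysis over a models-by-benchmarks score matrix. As a rough illustration only (not the authors' code: the scores below are synthetic, and the use of scikit-learn's FactorAnalysis is an assumption), a single-factor fit on a 156 x 10 matrix can be sketched in Python:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
g = rng.normal(size=(156, 1))                        # latent general ability, one value per model
true_loadings = rng.uniform(0.5, 1.0, size=(1, 10))  # every benchmark loads on g
scores = g @ true_loadings + rng.normal(scale=0.3, size=(156, 10))

fa = FactorAnalysis(n_components=1, random_state=0)
g_hat = fa.fit_transform(scores)                     # estimated factor score per model
print(np.round(fa.components_.ravel(), 2))           # estimated per-benchmark loadings

A suite dominated by a single factor shows uniformly high loadings on the one component, which is the pattern the abstract reports across its 10 benchmarks.

The Wisconsin Card Sorting Test probes cognitive flexibility by silently switching the sorting rule once the subject has mastered it. Below is a minimal sketch of how such a loop could be adapted for an LLM, with a hypothetical choose_pile callable standing in for the model call; the paper's actual deck, prompts, and scoring may differ:

import random

DIMS = {"color": ["red", "green", "blue"],
        "shape": ["circle", "square", "star"],
        "count": ["1", "2", "3"]}
# Three key cards, each with a unique value on every dimension.
KEY_CARDS = [{d: vals[i] for d, vals in DIMS.items()} for i in range(3)]

def run_wcst(choose_pile, n_trials=60, switch_after=6):
    """choose_pile(card, history) -> index (0-2) of the chosen key card."""
    rule = random.choice(list(DIMS))                 # active dimension, hidden from the subject
    history, streak, n_correct = [], 0, 0
    for _ in range(n_trials):
        card = {d: random.choice(vals) for d, vals in DIMS.items()}
        pick = choose_pile(card, history)
        ok = KEY_CARDS[pick][rule] == card[rule]     # match on the hidden dimension?
        history.append((card, pick, ok))             # feedback available on the next trial
        n_correct += ok
        streak = streak + 1 if ok else 0
        if streak == switch_after:                   # silent rule switch: the flexibility probe
            rule = random.choice([d for d in DIMS if d != rule])
            streak = 0
    return n_correct / n_trials

print(run_wcst(lambda card, history: random.randrange(3)))  # chance baseline, about 1/3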

Metadata

arXiv ID: 2603.02540
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-03
Fetched: 2026-03-04 03:41
Comments: 26 pages, 2 figures, 16 tables
Link: https://arxiv.org/abs/2603.02540v1
