
Common Sense vs. Morality: The Curious Case of Narrative Focus Bias in LLMs

Authors

Saugata Purkayastha, Pranav Kushare, Pragya Paramita Pal, Sukannya Purkayastha

Abstract

Large Language Models (LLMs) are increasingly deployed across diverse real-world applications and user communities. As such, it is crucial that these models remain both morally grounded and knowledge-aware. In this work, we uncover a critical limitation of current LLMs -- their tendency to prioritize moral reasoning over commonsense understanding. To investigate this phenomenon, we introduce CoMoral, a novel benchmark dataset containing commonsense contradictions embedded within moral dilemmas. Through extensive evaluation of ten LLMs across different model sizes, we find that existing models consistently struggle to identify such contradictions without prior signal. Furthermore, we observe a pervasive narrative focus bias, wherein LLMs more readily detect commonsense contradictions when they are attributed to a secondary character rather than the primary (narrator) character. Our comprehensive analysis underscores the need for enhanced reasoning-aware training to improve the commonsense robustness of large language models.

Metadata

arXiv ID: 2603.09434
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-10
Fetched: 2026-03-11 06:02

Categories: cs.CL, cs.AI
Comment: Accepted at LREC 2026