AI LLM February 19, 2026

Human attribution of empathic behaviour to AI systems

Authors

Jonas Festor, Ivo Snels, Bennett Kleinberg

Abstract

Artificial intelligence systems increasingly generate text intended to provide social and emotional support. Understanding how users perceive empathic qualities in such content is therefore critical. We examined differences in perceived empathy signals between human-written and large language model (LLM)-generated relationship advice, and the influence of authorship labels. Across two preregistered experiments (Study 1: n = 641; Study 2: n = 500), participants rated advice texts on overall quality and perceived cognitive, emotional, and motivational empathy. Multilevel models accounted for the nested rating structure. LLM-generated advice was consistently perceived as higher in overall quality, cognitive empathy, and motivational empathy. Evidence for a widely reported negativity bias toward AI-labelled content was limited. Emotional empathy showed no consistent source advantage. Individual differences in AI attitudes modestly influenced judgments but did not alter the overall pattern. These findings suggest that perceptions of empathic communication are primarily driven by linguistic features rather than authorship beliefs, with implications for the design of AI-mediated support systems.
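The abstract notes that multilevel models were used to account for the nested structure of the ratings (many participants rating the same advice texts). The paper's exact model specification is not given here, but a minimal sketch of this kind of analysis, using simulated data and a random intercept per text with `statsmodels`, might look as follows (all variable names, effect sizes, and sample sizes below are illustrative assumptions, not the study's actual data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical setup: 40 advice texts (20 human-written, 20 LLM-generated),
# each rated by 15 participants, giving ratings nested within texts.
n_texts, n_raters = 40, 15
text_id = np.repeat(np.arange(n_texts), n_raters)
source = np.repeat(np.array(["human", "llm"])[np.arange(n_texts) % 2], n_raters)

# Simulate a per-text random intercept plus a small assumed source effect.
text_effect = np.repeat(rng.normal(0.0, 0.5, n_texts), n_raters)
rating = (4.0 + 0.4 * (source == "llm")
          + text_effect
          + rng.normal(0.0, 1.0, n_texts * n_raters))

df = pd.DataFrame({"rating": rating, "source": source, "text_id": text_id})

# Random-intercept multilevel model: fixed effect of source,
# random intercept for each advice text.
model = smf.mixedlm("rating ~ source", df, groups=df["text_id"])
result = model.fit()
print(result.summary())
```

The key design choice is that `groups=df["text_id"]` treats each advice text as a cluster, so the standard error of the source effect reflects between-text variability rather than treating all ratings as independent.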

Metadata

arXiv ID: 2602.17293
Provider: ARXIV
Primary Category: cs.CY
Published: 2026-02-19
Fetched: 2026-02-21 18:51
