
Enhancing Debunking Effectiveness through LLM-based Personality Adaptation

Authors

Pietro Dell'Oglio, Alessandro Bondielli, Francesco Marcelloni, Lucia C. Passaro

Abstract

This study proposes a novel methodology for generating personalized fake news debunking messages by prompting Large Language Models (LLMs) with persona-based inputs aligned to the Big Five personality traits: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness. Our approach guides LLMs to transform generic debunking content into personalized versions tailored to specific personality profiles. To assess the effectiveness of these transformations, we employ a separate LLM as an automated evaluator simulating the corresponding personality traits, thereby eliminating the need for costly human evaluation panels. Our results show that personalized messages are generally rated as more persuasive than generic ones. We also find that traits like Openness tend to increase persuadability, while Neuroticism can lower it. Differences between LLM evaluators suggest that using multiple models provides a clearer picture. Overall, this work demonstrates a practical way to create more targeted debunking messages using LLMs, while also raising important ethical questions about how such technology might be used.
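The pipeline in the abstract has two prompting stages: a generator LLM rewrites a generic debunking message for a target Big Five profile, and a separate evaluator LLM, role-playing the same trait, scores the result. A minimal sketch of how those prompts might be constructed is shown below; the trait descriptions and prompt wording are illustrative assumptions, not the paper's actual prompts, and the LLM call itself is left abstract.

```python
# Hypothetical sketch of the two-stage persona pipeline described in the
# abstract. Trait descriptions and prompt templates are illustrative
# assumptions; any chat-completion API could consume these prompt strings.

BIG_FIVE = {
    "Openness": "curious, imaginative, receptive to new ideas",
    "Conscientiousness": "organized, detail-oriented, values evidence",
    "Extraversion": "sociable, energetic, responds to an enthusiastic tone",
    "Agreeableness": "cooperative, empathetic, values social harmony",
    "Neuroticism": "anxious, risk-averse, sensitive to threatening framing",
}

def personalization_prompt(debunk: str, trait: str) -> str:
    """Prompt for the generator LLM: tailor a generic debunking
    message to a reader high in the given Big Five trait."""
    profile = BIG_FIVE[trait]
    return (
        f"Rewrite the following debunking message for a reader high in "
        f"{trait} ({profile}), keeping every factual claim intact:\n\n"
        f"{debunk}"
    )

def evaluation_prompt(message: str, trait: str) -> str:
    """Prompt for the separate evaluator LLM: role-play the same
    trait and rate the message's persuasiveness."""
    profile = BIG_FIVE[trait]
    return (
        f"You are a reader high in {trait} ({profile}). Rate how "
        f"persuasive this debunking message is on a 1-5 scale. "
        f"Reply with a single number.\n\n{message}"
    )
```

Keeping generation and evaluation as two independent prompts to two different models mirrors the paper's design choice of using a separate LLM as judge, and makes it easy to swap in multiple evaluator models, which the authors found gives a clearer picture.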

Metadata

arXiv ID: 2603.09533
DOI: 10.1007/978-3-032-15632-7_23
Provider: ARXIV
Primary Category: cs.AI
Secondary Category: cs.CL
Venue: Computational Intelligence. IJCCI 2025. Springer, Cham (2026)
Published: 2026-03-10
Fetched: 2026-03-11 06:02