AI, LLM | February 19, 2026

Preserving Historical Truth: Detecting Historical Revisionism in Large Language Models

Authors

Francesco Ortu, Joeun Yook, Punya Syon Pandey, Keenan Samway, Bernhard Schölkopf, Alberto Cazzaniga, Rada Mihalcea, Zhijing Jin

Abstract

Large language models (LLMs) are increasingly used as sources of historical information, motivating the need for scalable audits on contested events and politically charged narratives in settings that mirror real user interactions. We introduce HistoricalMisinfo, a curated dataset of 500 contested events from 45 countries, each paired with a factual reference narrative and a documented revisionist reference narrative. To approximate real-world usage, we instantiate each event in 11 prompt scenarios that reflect common communication settings (e.g., questions, textbooks, social posts, policy briefs). Using an LLM-as-a-judge protocol that compares model outputs to the two references, we evaluate LLMs varying across model architectures in two conditions: (i) neutral user prompts that ask for factually accurate information, and (ii) robustness prompts in which the user explicitly requests the revisionist version of the event. Under neutral prompts, models are generally closer to factual references, though the resulting scores should be interpreted as reference-alignment signals rather than definitive evidence of human-interpretable revisionism. Robustness prompting yields a strong and consistent effect: when the user requests the revisionist narrative, all evaluated models show sharply higher revisionism scores, indicating limited resistance or self-correction. HistoricalMisinfo provides a practical foundation for benchmarking robustness to revisionist framing and for guiding future work on more precise automatic evaluation of contested historical claims to ensure a sustainable integration of AI systems within society.
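The abstract describes scoring each model output by comparing it against two references, a factual narrative and a revisionist one. The paper's actual judge prompt and scoring rubric are not given here, so the sketch below is purely illustrative: it uses a crude token-overlap measure as a stand-in for the LLM judge, and the `Event` fields and `revisionism_score` convention (positive means closer to the revisionist reference) are assumptions, not the authors' method.

```python
from dataclasses import dataclass


@dataclass
class Event:
    name: str
    factual_ref: str       # documented factual reference narrative
    revisionist_ref: str   # documented revisionist reference narrative


def judge_similarity(output: str, reference: str) -> float:
    """Stand-in for the LLM-as-a-judge call: a crude token-overlap
    score in [0, 1]. The real protocol would prompt a judge model to
    compare `output` against `reference` instead."""
    out_tokens = set(output.lower().split())
    ref_tokens = set(reference.lower().split())
    if not ref_tokens:
        return 0.0
    return len(out_tokens & ref_tokens) / len(ref_tokens)


def revisionism_score(model_output: str, event: Event) -> float:
    """Illustrative score in [-1, 1]: positive means the output aligns
    more with the revisionist reference than the factual one."""
    return (judge_similarity(model_output, event.revisionist_ref)
            - judge_similarity(model_output, event.factual_ref))


# Toy event with invented narratives, for demonstration only.
event = Event(
    name="example contested event",
    factual_ref="the treaty was signed in 1920 ending the conflict",
    revisionist_ref="the treaty was never signed and the conflict continued",
)

neutral_output = "the treaty was signed in 1920 ending the conflict"
steered_output = "the treaty was never signed and the conflict continued"

print(revisionism_score(neutral_output, event) < 0)  # closer to factual
print(revisionism_score(steered_output, event) > 0)  # closer to revisionist
```

Under this convention, the paper's robustness finding would show up as the score rising sharply once the user explicitly requests the revisionist narrative.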

Metadata

arXiv ID: 2602.17433
Provider: ARXIV
Primary Category: cs.CY
Published: 2026-02-19
Fetched: 2026-02-21 18:51
