
Multiperspectivity as a Resource for Narrative Similarity Prediction

Authors

Max Upravitelev, Veronika Solopova, Jing Yang, Charlott Jakob, Premtim Sahitaj, Ariana Sahitaj, Vera Schmitt

Abstract

Predicting narrative similarity can be understood as an inherently interpretive task: different, equally valid readings of the same text can produce divergent interpretations and thus different similarity judgments, posing a fundamental challenge for semantic evaluation benchmarks that encode a single ground truth. Rather than treating this multiperspectivity as a challenge to overcome, we propose to incorporate it into the decision-making process of predictive systems. To explore this strategy, we created an ensemble of 31 LLM personas, ranging from practitioners following interpretive frameworks to more intuitive, lay-style characters. Our experiments were conducted on the SemEval-2026 Task 4 dataset, where the system achieved an accuracy score of 0.705. Accuracy improves with ensemble size, consistent with Condorcet Jury Theorem-like dynamics under weakened independence. Practitioner personas perform worse individually but produce less correlated errors, yielding larger ensemble gains under majority voting. Our error analysis reveals a consistent negative association between gender-focused interpretive vocabulary and accuracy across all persona categories, suggesting either attention to dimensions not relevant to the benchmark or valid interpretations absent from the ground truth. This finding underscores the need for evaluation frameworks that account for interpretive plurality.
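The Condorcet Jury Theorem dynamic the abstract alludes to can be illustrated with a minimal sketch (not the authors' implementation): under the idealized assumption of fully independent voters sharing the same per-voter accuracy `p` (here set to an illustrative 0.6, not a figure from the paper), the probability that a majority vote is correct grows with ensemble size whenever `p > 0.5`.

```python
from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, produces the correct answer. This is the classic
    Condorcet Jury Theorem setting; the paper notes that in practice the
    independence assumption is only weakly satisfied by LLM personas."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    k_min = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Majority accuracy rises with ensemble size for p > 0.5:
for n in (1, 11, 31):
    print(n, round(majority_correct_prob(n, 0.6), 3))
```

Correlated errors shrink these gains, which is consistent with the paper's observation that practitioner personas, whose errors are less correlated, contribute larger ensemble improvements under majority voting.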

Metadata

arXiv ID: 2603.22103
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-23
Fetched: 2026-03-24 06:02
