
"The explanation makes sense": An Empirical Study on LLM Performance in News Classification and its Influence on Judgment in Human-AI Collaborative Annotation

Authors

Qile Wang, Prerana Khatiwada, Avinash Chouhan, Ashrey Mahesh, Joy Mwaria, Duy Duc Tran, Kenneth E. Barner, Matthew Louis Mauriello

Abstract

The spread of media bias is a significant concern as political discourse shapes beliefs and opinions. Addressing this challenge computationally requires improved methods for interpreting news. While large language models (LLMs) can scale classification tasks, concerns remain about their trustworthiness. To advance human-AI collaboration, we investigate the feasibility of using LLMs to classify U.S. news by political ideology and examine their effect on user decision-making. We first compared GPT models with prompt engineering to state-of-the-art supervised machine learning on a public dataset of 34k articles. We then collected 17k news articles and tested GPT-4 predictions with brief and detailed explanations. In a between-subjects study (N=124), we evaluated how LLM-generated explanations influence human annotation, judgment, and confidence. Results show that AI assistance significantly increases confidence ($p<.001$), with detailed explanations more persuasive and more likely to alter decisions. We highlight recommendations for AI explanations through thematic analysis and provide our dataset for further research.
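The paper itself contributes a dataset and study results rather than code, but the abstract's prompt-engineering setup can be illustrated with a minimal Python sketch using the OpenAI chat completions API. The label set, prompt wording, and temperature here are assumptions for illustration, not the authors' actual protocol.

# Minimal sketch (not from the paper) of prompt-based political-ideology
# classification with an OpenAI chat model, as the abstract describes.
# LABELS, the prompt text, and the temperature setting are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["left", "center", "right"]  # assumed ideology label set

def classify_article(text: str) -> str:
    """Ask the model for a one-word ideology label for a news article."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You classify U.S. news articles by political "
                        f"ideology. Answer with one of: {', '.join(LABELS)}."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # reduce variability across repeated annotations
    )
    return response.choices[0].message.content.strip().lower()

A variant of the same call could be used to elicit the brief or detailed explanations the study compares, by asking the model to justify its label in one sentence versus a full paragraph.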

Metadata

arXiv ID: 2602.19690
URL: https://arxiv.org/abs/2602.19690v1
PDF: https://arxiv.org/pdf/2602.19690v1
Provider: ARXIV
Primary Category: cs.HC
Published: 2026-02-23
Fetched: 2026-02-24 04:38

