A Browser-based Open Source Assistant for Multimodal Content Verification

Authors

Rosanna Milner, Michael Foster, Olesya Razuvayevskaya, Ian Roberts, Valentin Porcellini, Denis Teyssou, Kalina Bontcheva

Abstract

Disinformation and false content produced by generative AI pose a significant challenge for journalists and fact-checkers who must rapidly verify digital media information. While there is an abundance of NLP models for detecting credibility signals such as persuasion techniques, subjectivity, or machine-generated text, such methods often remain inaccessible to non-expert users and are not integrated into their daily workflows as a unified framework. This paper demonstrates the VERIFICATION ASSISTANT, a browser-based tool designed to bridge this gap. The VERIFICATION ASSISTANT, a core component of the widely adopted VERIFICATION PLUGIN (140,000+ users), allows users to submit URLs or media files to a unified interface. It automatically extracts content and routes it to a suite of backend NLP classifiers, delivering actionable credibility signals, estimating AI-generated content, and providing other verification guidance in a clear, easy-to-digest format. This paper showcases the tool architecture, its integration of multiple NLP services, and its real-world application to detecting disinformation.
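The abstract describes a fan-out pattern: submitted content is extracted, routed to a suite of backend NLP classifiers, and the resulting credibility signals are aggregated into one report. A minimal sketch of that pattern is below; the classifier names, heuristics, and scores are illustrative assumptions, not the VERIFICATION ASSISTANT's actual API or models.

```python
# Sketch of the route-and-aggregate pattern from the abstract.
# The two "classifiers" here are stand-in heuristics; a real deployment
# would call trained NLP services (persuasion detection, AI-text detection).
from dataclasses import dataclass
from typing import Callable, List, Dict


@dataclass
class Signal:
    """One credibility signal returned by a backend classifier."""
    name: str
    score: float  # 0.0-1.0, higher means a stronger signal


def detect_persuasion(text: str) -> Signal:
    # Placeholder heuristic: count emotionally loaded words.
    loaded = {"outrageous", "shocking", "must"}
    hits = sum(word in loaded for word in text.lower().split())
    return Signal("persuasion_techniques", min(1.0, hits / 3))


def detect_machine_text(text: str) -> Signal:
    # Placeholder standing in for a machine-generated-text detector.
    return Signal("machine_generated", 0.5 if len(text.split()) > 50 else 0.1)


CLASSIFIERS: List[Callable[[str], Signal]] = [
    detect_persuasion,
    detect_machine_text,
]


def verify(text: str) -> Dict[str, float]:
    """Route extracted text to every registered classifier and
    collect the credibility signals into a single report."""
    return {sig.name: sig.score for sig in (clf(text) for clf in CLASSIFIERS)}


report = verify("This shocking claim is outrageous and you must share it")
# report maps each signal name to its score, e.g.
# {"persuasion_techniques": 1.0, "machine_generated": 0.1}
```

The registry-of-classifiers design makes it straightforward to add new backend services, which matches the paper's framing of the tool as a unified interface over multiple NLP models.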

Metadata

arXiv ID: 2603.02842
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-03
Fetched: 2026-03-04 03:41
