Paper
Human, AI, and Hybrid Ensembles for Detection of Adaptive, RL-based Social Bots
Authors
Valerio La Gatta, Nathan Subrahmanian, Kaitlyn Wang, Larry Birnbaum, V. S. Subrahmanian
Abstract
The use of reinforcement learning (RL) to dynamically adapt and evade detection is now well documented in several cybersecurity settings, including Covert Social Influence Operations (CSIOs), in which bots try to spread disinformation. While AI bot detectors have improved greatly, they are largely limited to detecting static bots that do not adapt dynamically. We present the first systematic study comparing the ability of humans, AI models, and hybrid Human-AI ensembles to detect adaptive bots powered by reinforcement learning. Using data from a controlled, IRB-approved, five-day experiment in which participants interacted on a social media platform infiltrated by RL-trained bots spreading disinformation to influence participants on four topics, we examine factors potentially shaping human detection capabilities: demographic characteristics, temporal learning effects, social network position, engagement patterns, and collective intelligence mechanisms. We first test 13 hypotheses comparing human bot detection performance against state-of-the-art AI approaches based on both traditional machine learning and large language models. We then investigate several aggregation strategies that combine human bot reports with AI predictions, as well as retraining protocols that leverage human supervision. Our findings challenge intuitive assumptions about bot detection, reveal unexpected patterns in how humans identify bots, and show that combining human bot reports with AI predictions outperforms either humans or AI alone. We conclude by discussing the practical implications of these results for industry.
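The abstract mentions aggregation strategies that combine human bot reports with AI predictions but does not specify them. As a minimal sketch of what one such strategy could look like, the snippet below mixes the fraction of viewers who reported an account with an AI detector's bot probability via a weighted score; the weighting scheme, threshold, and function name are illustrative assumptions, not the paper's actual method.

```python
def aggregate(human_reports: int, num_viewers: int,
              ai_prob: float, w_human: float = 0.5,
              threshold: float = 0.5) -> bool:
    """Flag an account as a bot when a weighted mix of the human
    report rate and the AI bot probability crosses a threshold.

    Hypothetical example only: the paper does not disclose its
    aggregation rule, weights, or threshold.
    """
    # Fraction of human viewers who reported this account as a bot.
    report_rate = human_reports / num_viewers if num_viewers else 0.0
    # Linear blend of human and AI signals.
    score = w_human * report_rate + (1.0 - w_human) * ai_prob
    return score >= threshold

# Example: 3 of 10 viewers reported the account, AI probability 0.8
# -> score = 0.5 * 0.3 + 0.5 * 0.8 = 0.55, so the account is flagged.
```

A soft-voting blend like this is one of the simplest ways to combine a crowd signal with a model score; the paper's finding that such hybrids beat either signal alone is consistent with the general intuition behind weighted ensembles.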
Metadata
arXiv ID: 2603.23796v1
Published: 2026-03-25
Primary category: cs.SI
Comment: Under review
PDF: https://arxiv.org/pdf/2603.23796v1
Related papers
Fractal universe and quantum gravity made simple
Fabio Briscese, Gianluca Calcagni • 2026-03-25
POLY-SIM: Polyglot Speaker Identification with Missing Modality Grand Challenge 2026 Evaluation Plan
Marta Moscati, Muhammad Saad Saeed, Marina Zanoni, Mubashir Noman, Rohan Kuma... • 2026-03-25
LensWalk: Agentic Video Understanding by Planning How You See in Videos
Keliang Li, Yansong Li, Hongze Shen, Mengdi Liu, Hong Chang, Shiguang Shan • 2026-03-25
Orientation Reconstruction of Proteins using Coulomb Explosions
Tomas André, Alfredo Bellisario, Nicusor Timneanu, Carl Caleman • 2026-03-25
The role of spatial context and multitask learning in the detection of organic and conventional farming systems based on Sentinel-2 time series
Jan Hemmerling, Marcel Schwieder, Philippe Rufin, Leon-Friedrich Thomas, Mire... • 2026-03-25