Paper
InterveneBench: Benchmarking LLMs for Intervention Reasoning and Causal Study Design in Real Social Systems
Authors
Shaojie Shi, Zhengyu Shi, Lingran Zheng, Xinyu Su, Anna Xie, Bohao Lv, Rui Xu, Zijian Chen, Zhichao Chen, Guolei Liu, Naifu Zhang, Mingjian Dong, Zhuo Quan, Bohao Chen, Teqi Hao, Yuan Qi, Yinghui Xu, Libo Wu
Abstract
Causal inference in social science relies on end-to-end, intervention-centered research-design reasoning grounded in real-world policy interventions, but current benchmarks fail to evaluate this capability in large language models (LLMs). We present InterveneBench, a benchmark designed to assess such reasoning in realistic social settings. Each instance in InterveneBench is derived from an empirical social science study and requires models to reason about policy interventions and identification assumptions without access to predefined causal graphs or structural equations. InterveneBench comprises 744 peer-reviewed studies across diverse policy domains. Experimental results show that state-of-the-art LLMs struggle under this setting. To address this limitation, we further propose STRIDES, a multi-agent framework that achieves significant performance improvements over state-of-the-art reasoning models. Our code and data are available at https://github.com/Sii-yuning/STRIDES.
Metadata
arXiv: 2603.15542v1 • Published: 2026-03-16 • Categories: cs.CY (primary), cs.AI • Comment: 35 pages, 3 figures