Paper
Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions
Authors
Saleh Afroogh, Seyd Ishtiaque Ahmed, Petra Ahrweiler, David Alvarez-Melis, Mansur Maturidi Arief, Emilia Barakova, Falco J. Bargagli-Stoffi, Erdem Biyik, Hanjie Chen, Xiang 'Anthony' Chen, Robert Clements, Keeley Crockett, Amit Dhurandhar, Fethiye Irmak Dogan, Mollie Dollinger, Motahhare Eslami, Aldo A Faisal, Arya Farahi, Melanie Fernandez Pradie, Saadia Gabrie, Diego Garcia-Olano, Marzyeh Ghassemi, Shaona Ghosh, Hatice Gunes, Ehsan Hajiramezanali, Stefan Haufe, Biwei Huang, Angel Hwang, Md Tauhidul Islam, Junfeng Jiao, Amir-Hossein Karimi, Saber Kazeminasab, Anastasia Kuzminykh, William La Cava, Brian Y. Lim, Xiaofeng Liu, Mohammad R. K. Mofrad, Alicia Parrish, Maria Perez-Ortiz, Shriti Raj, Swabha Swayamdipta, Salmon Talebi, Kush R. Varshney, Mihaela Vorvoreanu, Lily Weng, Alice Xiang, Yiming Xu, Ding Zhao, Jieyu Zhao
Abstract
This study provides a cross-disciplinary examination of Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs), and identifies empirical and conceptual limitations in current XAI. We discuss critical symptoms that stem from deeper root causes: two paradoxes, two conceptual confusions, and five false assumptions. These fundamental problems within the current XAI research field reveal three insights: experimentally, XAI exhibits significant flaws; conceptually, it is paradoxical; and pragmatically, further attempts to reform the paradoxical XAI might exacerbate its confusion, demanding fundamental shifts and new research directions. To move beyond XAI's limitations, we propose a four-pronged, synthesized paradigm shift toward reliable and certified AI development. The four components are: verification-focused Interactive AI (IAI), which establishes scientific-community protocols for certifying AI system performance rather than attempting post-hoc explanations; AI Epistemology, which provides rigorous scientific foundations; User-Sensible AI, which creates context-aware systems tailored to specific user communities; and Model-Centered Interpretability, which supports faithful technical analysis. Together, these offer comprehensive post-XAI research directions.
Metadata
arXiv: 2602.24176v1 • Primary category: cs.CY • Published: 2026-02-27