Paper
Beyond Microservices: Testing Web-Scale RCA Methods on GPU-Driven LLM Workloads
Authors
Dominik Scheinert, Alexander Acker, Thorsten Wittkopp, Soeren Becker, Hamza Yous, Karnakar Reddy, Ibrahim Farhat, Hakim Hacid, Odej Kao
Abstract
Large language model (LLM) services have become an integral part of search, assistance, and decision-making applications. However, compared with traditional web or microservice systems, the hardware and software stack enabling LLM inference deployments is more complex and far less field-tested, making it more susceptible to failures that are difficult to resolve. Keeping outage costs and quality-of-service degradations in check depends on shortening mean time to repair, which in practice is gated by how quickly a fault is identified, located, and diagnosed. Automated root cause analysis (RCA) accelerates failure localization by identifying the system component that failed and tracing how the failure propagated. Numerous RCA methods have been developed for traditional services, drawing on request path tracing and the analysis of resource metrics and logs. Yet existing RCA methods were not designed for LLM deployments, which exhibit distinct runtime characteristics. In this study, we evaluate the effectiveness of RCA methods on a best-practice LLM inference deployment under controlled failure injections. Across 24 methods (20 metric-based, two trace-based, and two multi-source), we find that multi-source approaches achieve the highest accuracy, metric-based methods show fault-type-dependent performance, and trace-based methods largely fail. These results reveal that existing RCA tools do not generalize to LLM systems, motivating tailored analysis techniques and enhanced observability, for which we formulate guidelines.
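As a hypothetical illustration of what the simplest class of metric-based RCA might look like (this is not the paper's method; the component names, the z-score heuristic, and the fault-injection toy data are assumptions made for the sketch), the snippet below ranks deployment components by how anomalous their resource metrics become after a fault is injected, relative to a pre-fault baseline window.

# Hypothetical sketch of a minimal metric-based RCA baseline (not from the paper).
# Ranks components by post-fault metric deviation using a z-score heuristic.
import numpy as np

def rank_components(metrics: dict, fault_start: int) -> list:
    """Rank components by how anomalous their metric becomes after fault_start.

    metrics: component name -> 1-D time series of a resource metric
             (e.g., GPU utilization sampled at a fixed interval).
    fault_start: index at which the fault injection begins.
    Returns (component, score) pairs, most suspicious first.
    """
    scores = {}
    for component, series in metrics.items():
        baseline = series[:fault_start]
        faulty = series[fault_start:]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9  # avoid division by zero
        # Mean absolute z-score of the post-fault window against the baseline.
        scores[component] = float(np.abs((faulty - mu) / sigma).mean())
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    components = ["gateway", "scheduler", "gpu-worker-0", "gpu-worker-1"]  # assumed names
    ts = {name: rng.normal(50, 5, 120) for name in components}
    ts["gpu-worker-0"][60:] += 30  # injected fault: sustained utilization spike
    print(rank_components(ts, fault_start=60))  # gpu-worker-0 should rank first

The RCA methods evaluated in the paper (correlation-, causal-graph-, trace-, and multi-source-based) are considerably more sophisticated than this single-metric ranking; the sketch only conveys the basic idea of localizing a fault from metric deviations.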
Metadata
arXiv:2603.02057v1 [cs.DC] • Published 2026-03-02 • 13 pages, 8 figures, 1 table
PDF: https://arxiv.org/pdf/2603.02057v1
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25