AI · LLM · March 02, 2026

Beyond Microservices: Testing Web-Scale RCA Methods on GPU-Driven LLM Workloads

Authors

Dominik Scheinert, Alexander Acker, Thorsten Wittkopp, Soeren Becker, Hamza Yous, Karnakar Reddy, Ibrahim Farhat, Hakim Hacid, Odej Kao

Abstract

Large language model (LLM) services have become an integral part of search, assistance, and decision-making applications. However, unlike traditional web or microservice stacks, the hardware and software stack enabling LLM inference deployment is more complex and far less field-tested, making it more susceptible to failures that are difficult to resolve. Keeping outage costs and quality-of-service degradations in check depends on shortening mean time to repair, which in practice is gated by how quickly the fault is identified, located, and diagnosed. Automated root cause analysis (RCA) accelerates failure localization by identifying the system component that failed and tracing how the failure propagated. Numerous RCA methods have been developed for traditional services, drawing on request-path tracing and the analysis of resource metrics and log data. Yet existing RCA methods were not designed for LLM deployments, which present distinct runtime characteristics. In this study, we evaluate the effectiveness of RCA methods on a best-practice LLM inference deployment under controlled failure injections. Across 24 methods (20 metric-based, 2 trace-based, and 2 multi-source), we find that multi-source approaches achieve the highest accuracy, metric-based methods show fault-type-dependent performance, and trace-based methods largely fail. These results reveal that existing RCA tools do not generalize to LLM systems, motivating tailored analysis techniques and enhanced observability, for which we formulate guidelines.
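To give a flavor of what the "metric-based" family evaluated above does, here is a minimal, self-contained sketch of one common baseline: ranking components by how strongly their resource metric deviates during a fault window relative to a normal baseline window (a z-score ranking). This is an illustration only, not one of the paper's 20 evaluated methods; all component names and metric values below are hypothetical.

```python
# Minimal metric-based RCA baseline (illustrative, hypothetical data):
# rank components by the z-score of their metric during the fault window,
# computed against mean/stddev of a preceding normal window.
import statistics

def zscore_rank(metrics, baseline, fault):
    """metrics: {component: [metric samples]}; baseline/fault: slice objects."""
    scores = {}
    for comp, series in metrics.items():
        base = series[baseline]
        mu = statistics.mean(base)
        sigma = statistics.pstdev(base) or 1e-9  # guard against zero variance
        fault_mean = statistics.mean(series[fault])
        scores[comp] = abs(fault_mean - mu) / sigma
    # Highest deviation first: the top-ranked component is the RCA candidate.
    return sorted(scores, key=scores.get, reverse=True)

metrics = {
    "gpu_worker": [50, 51, 49, 50, 95, 97, 96],  # spikes in the fault window
    "scheduler":  [10, 11, 10, 10, 11, 10, 11],
    "kv_cache":   [30, 29, 31, 30, 33, 32, 31],
}
ranking = zscore_rank(metrics, baseline=slice(0, 4), fault=slice(4, 7))
# ranking[0] == "gpu_worker": its metric deviates most during the fault
```

A baseline this simple already hints at the paper's finding: it works when the faulty component's own metric visibly deviates, but says nothing about failure propagation, which is where multi-source approaches gain their edge.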

Metadata

arXiv ID: 2603.02057
Provider: ARXIV
Primary Category: cs.DC
Published: 2026-03-02
Comments: 13 pages, 8 figures, 1 table
Fetched: 2026-03-03 04:34
