Paper
ReviveMoE: Fast Recovery for Hardware Failures in Large-Scale MoE LLM Inference Deployments
Authors
Haley Li, Xinglu Wang, Cong Feng, Chunxu Zuo, Yanan Wang, Hei Lo, Yufei Cui, Bingji Wang, Duo Cui, Shuming Jing, Yizhou Shan, Ying Xiong, Jiannan Wang, Yong Zhang, Zhenan Fan
Abstract
As LLM deployments scale across more hardware, the probability of a single failure somewhere in the system increases significantly, and cloud operators must deploy robust countermeasures to handle these inevitable failures. A common recovery approach is to simply restart the LLM serving instance; however, this is costly in model-as-a-service (MaaS) inference settings, where reloading model weights and recompiling computation graphs can introduce significant delays for incoming requests. We propose ReviveMoE, a method for rapid failure recovery in large-scale LLM deployments without restarting the serving instance. ReviveMoE supports both the traditional LLM architecture, which collocates MoE and attention on the same hardware, and disaggregated architectures, which separate MoE from attention. Integrated into Huawei Cloud's MaaS, ReviveMoE is built on top of Huawei's xDeepServe serving platform and the XCCL communications library.
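To make the cost asymmetry behind the abstract's argument concrete, the toy model below compares full-restart recovery, which pays for weight reloading and graph recompilation, against an in-place recovery path that keeps the instance alive. All figures are illustrative assumptions for the sketch, not measurements from ReviveMoE:

```python
# Toy cost model contrasting full-restart recovery with in-place
# recovery for a large MoE serving instance. All durations below are
# hypothetical placeholders, not numbers reported by the paper.

WEIGHT_RELOAD_S = 300.0   # assumed: reload multi-hundred-GB model weights
GRAPH_COMPILE_S = 120.0   # assumed: recompile computation graphs
REINIT_S        = 30.0    # assumed: process/runtime re-initialization
REPAIR_S        = 5.0     # assumed: in-place repair of the failed component

def restart_recovery_s() -> float:
    """Recovery time when the whole serving instance is restarted:
    everything must be rebuilt from scratch."""
    return REINIT_S + WEIGHT_RELOAD_S + GRAPH_COMPILE_S

def in_place_recovery_s() -> float:
    """Recovery time when the instance survives the failure and only
    the failed component is repaired; resident weights and compiled
    graphs are reused."""
    return REPAIR_S

speedup = restart_recovery_s() / in_place_recovery_s()
print(f"restart: {restart_recovery_s():.0f}s, "
      f"in-place: {in_place_recovery_s():.0f}s, "
      f"speedup: {speedup:.0f}x")
```

Even with generous assumptions, the restart path is dominated by the two terms ReviveMoE avoids (weight reload and graph recompilation), which is why restart-free recovery pays off at MaaS scale.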
Metadata
arXiv: 2602.21140v1 • cs.DC • Published 2026-02-24 • 21 pages, 6 figures