AI LLM February 24, 2026

ReviveMoE: Fast Recovery for Hardware Failures in Large-Scale MoE LLM Inference Deployments

Authors

Haley Li, Xinglu Wang, Cong Feng, Chunxu Zuo, Yanan Wang, Hei Lo, Yufei Cui, Bingji Wang, Duo Cui, Shuming Jing, Yizhou Shan, Ying Xiong, Jiannan Wang, Yong Zhang, Zhenan Fan

Abstract

As LLM deployments scale across more hardware, the probability of a failure somewhere in the system rises sharply, and cloud operators must deploy robust countermeasures to handle these inevitable failures. A common recovery approach is to simply restart the LLM serving instance; however, this is costly in model-as-a-service (MaaS) inference settings, where reloading model weights and recompiling computation graphs can introduce significant delays for incoming requests. We propose ReviveMoE, a method for rapid failure recovery in large-scale LLM deployments without restarting the serving instance. ReviveMoE supports both the traditional LLM architecture, which collocates MoE and attention on the same hardware, and disaggregated architectures, which separate MoE from attention. Integrated into Huawei Cloud's MaaS, ReviveMoE is built on top of Huawei's xDeepServe serving platform and the XCCL communication library.
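The cost asymmetry the abstract describes can be illustrated with a toy model: a full restart reloads every rank's weights and recompiles every graph, while a restart-free scheme only touches the state displaced by the failed rank. This is a minimal sketch under assumed names and semantics (`MoEInstance`, `restart_recovery`, `in_place_recovery` are all hypothetical), not ReviveMoE's actual mechanism.

```python
# Hypothetical sketch of restart-free recovery vs. full restart for an
# MoE serving instance. All class and method names are illustrative
# assumptions, not the paper's implementation.
from dataclasses import dataclass, field


@dataclass
class MoEInstance:
    num_ranks: int
    experts_per_rank: int
    # expert id -> rank currently hosting that expert's weights
    placement: dict = field(default_factory=dict)

    def __post_init__(self):
        for rank in range(self.num_ranks):
            for e in range(self.experts_per_rank):
                self.placement[rank * self.experts_per_rank + e] = rank

    def restart_recovery(self):
        # Full restart: every rank reloads its expert weights and
        # recompiles its graph, so downtime scales with instance size.
        return {"weights_reloaded": self.num_ranks * self.experts_per_rank,
                "graphs_recompiled": self.num_ranks}

    def in_place_recovery(self, failed_rank):
        # Restart-free recovery: only the failed rank's experts are
        # remapped onto surviving ranks; healthy ranks keep serving.
        survivors = [r for r in range(self.num_ranks) if r != failed_rank]
        displaced = [e for e, r in self.placement.items() if r == failed_rank]
        for i, e in enumerate(displaced):
            self.placement[e] = survivors[i % len(survivors)]
        return {"weights_reloaded": len(displaced), "graphs_recompiled": 0}


inst = MoEInstance(num_ranks=8, experts_per_rank=4)
full = inst.restart_recovery()    # touches all 32 experts and 8 graphs
fast = inst.in_place_recovery(3)  # touches only rank 3's 4 experts
```

In this toy accounting, restart cost grows with the whole deployment while in-place recovery cost grows only with one rank's share, which is the motivation for avoiding the restart path in large MaaS instances.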

Metadata

arXiv ID: 2602.21140
Provider: ARXIV
Primary Category: cs.DC
Comment: 21 pages, 6 figures
Published: 2026-02-24
Fetched: 2026-02-25 06:05
