AI, LLM · March 13, 2026

LightMoE: Reducing Mixture-of-Experts Redundancy through Expert Replacing

Authors

Jiawei Hao, Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Dan Zeng

Abstract

Mixture-of-Experts (MoE) based Large Language Models (LLMs) have demonstrated impressive performance and computational efficiency. However, their deployment is often constrained by substantial memory demands, primarily due to the need to load numerous expert modules. While existing expert compression techniques such as pruning or merging attempt to mitigate this, they often suffer from irreversible knowledge loss or high training overhead. In this paper, we propose a novel expert compression paradigm termed expert replacing, which replaces redundant experts with parameter-efficient modules and recovers their capabilities at low training cost. We find that even a straightforward baseline of this paradigm yields promising performance. Building on this foundation, we introduce LightMoE, a framework that enhances the paradigm with adaptive expert selection, hierarchical expert construction, and an annealed recovery strategy. Experimental results show that LightMoE matches the performance of LoRA fine-tuning at a 30% compression ratio. Even under a more aggressive 50% compression ratio, it outperforms existing methods and achieves an average performance improvement of 5.6% across five diverse tasks. These findings demonstrate that LightMoE strikes a superior balance among memory efficiency, training efficiency, and model performance.
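To make the expert-replacing idea concrete, here is a minimal sketch of the core mechanism the abstract describes: swapping redundant MoE expert FFNs for small parameter-efficient modules. The paper's actual module design, expert-selection criterion, and recovery training are not specified in this abstract, so the low-rank bottleneck below (`LowRankExpert`), the class names, and the hand-picked `redundant_ids` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ExpertFFN(nn.Module):
    """A standard MoE expert: a two-layer feed-forward block."""
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))

class LowRankExpert(nn.Module):
    """Hypothetical parameter-efficient replacement: a low-rank
    bottleneck standing in for the paper's replacement module."""
    def __init__(self, d_model, rank):
        super().__init__()
        self.up = nn.Linear(d_model, rank, bias=False)
        self.down = nn.Linear(rank, d_model, bias=False)

    def forward(self, x):
        return self.down(self.up(x))

def replace_redundant_experts(experts, redundant_ids, d_model, rank):
    """Swap the selected experts for small modules in place; in the
    paper these replacements are then briefly trained to recover
    the replaced experts' capabilities."""
    for i in redundant_ids:
        experts[i] = LowRankExpert(d_model, rank)
    return experts

d_model, d_ff, rank = 64, 256, 8
experts = nn.ModuleList([ExpertFFN(d_model, d_ff) for _ in range(8)])
full = sum(p.numel() for p in experts.parameters())

# Replace half of the experts (ids chosen arbitrarily for the demo).
experts = replace_redundant_experts(experts, [1, 3, 5, 7], d_model, rank)
compressed = sum(p.numel() for p in experts.parameters())
print(f"parameter reduction: {1 - compressed / full:.2f}")  # ~0.48
```

Because each low-rank module holds roughly 1/32 of a full expert's parameters, replacing half the experts removes almost half the expert memory, which is the trade-off the abstract quantifies at its 30% and 50% compression ratios.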

Metadata

arXiv ID: 2603.12645
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-13
Fetched: 2026-03-16 06:01
