Paper
SpecForge: A Flexible and Efficient Open-Source Training Framework for Speculative Decoding
Authors
Shenggui Li, Chao Wang, Yikai Zhu, Yubo Wang, Fan Yin, Shuai Shi, Yefei Chen, Xiaomin Dong, Qiaoling Chen, Jin Pan, Ji Li, Laixin Xie, Yineng Zhang, Lei Yu, Yonggang Wen, Ivor Tsang, Tianwei Zhang
Abstract
Large language models incur high inference latency due to sequential autoregressive decoding. Speculative decoding alleviates this bottleneck by using a lightweight draft model to propose multiple tokens for batched verification. However, its adoption has been limited by the lack of high-quality draft models and scalable training infrastructure. We introduce SpecForge, an open-source, production-oriented framework for training speculative decoding models with full support for EAGLE-3. SpecForge incorporates target-draft decoupling, hybrid parallelism, optimized training kernels, and integration with production-grade inference engines, enabling up to 9.9x faster EAGLE-3 training for Qwen3-235B-A22B. In addition, we release SpecBundle, a suite of production-grade EAGLE-3 draft models trained with SpecForge for mainstream open-source LLMs. Through a systematic study of speculative decoding training recipes, SpecBundle addresses the scarcity of high-quality drafts in the community, and our draft models achieve up to 4.48x end-to-end inference speedup on SGLang, establishing SpecForge as a practical foundation for real-world speculative decoding deployment.