
A Pipelined Collaborative Speculative Decoding Framework for Efficient Edge-Cloud LLM Inference

Authors

Yida Zhang, Zhiyong Gao, Shuaibing Yue, Jie Li, Rui Wang

Abstract

Recent advancements and widespread adoption of Large Language Models (LLMs) in both industry and academia have catalyzed significant demand for LLM serving. However, traditional cloud services incur high costs, while on-device inference alone faces challenges due to limited resources. Edge-cloud collaboration has emerged as a key research direction for combining the strengths of both paradigms, yet efficiently utilizing limited network bandwidth while fully leveraging and balancing the computational capabilities of edge devices and the cloud remains an open problem. To address these challenges, we propose the Pipelined Collaborative Speculative Decoding Framework (PicoSpec), a novel, general-purpose, and training-free speculative decoding framework for edge-cloud collaborative LLM inference. We design an asynchronous pipeline that concurrently executes a Small Language Model (SLM) on the edge device and an LLM in the cloud, resolving the mutual-waiting problem inherent in vanilla speculative decoding in edge-collaboration scenarios. Meanwhile, to mitigate the significant communication latency caused by transmitting vocabulary distributions, we introduce separate rejection sampling with sparse compression, which completes rejection sampling at only the one-time cost of transmitting the compressed vocabulary distribution. Experimental results demonstrate that our solution outperforms baseline and existing methods, achieving up to 2.9× speedup.
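The two mechanisms the abstract names can be illustrated with a toy sketch: sparse compression keeps only the top-k entries of a vocabulary distribution (so far less data crosses the network), and speculative decoding's rejection sampling then accepts or resamples a draft token using the two compressed distributions. This is a minimal illustration of standard speculative sampling under an assumed top-k compression scheme, not the authors' actual PicoSpec implementation; all function names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_compress(probs, k):
    """Keep the k largest entries of a vocabulary distribution and
    renormalize -- a toy stand-in for the paper's sparse compression."""
    idx = np.argsort(probs)[-k:]          # indices of the k most likely tokens
    vals = probs[idx]
    return idx, vals / vals.sum()          # (token ids, renormalized probs)

def rejection_sample(token, q_idx, q_vals, p_idx, p_vals, vocab_size):
    """Standard speculative-decoding accept/reject: q is the edge (draft)
    distribution, p is the cloud (target) distribution, both compressed.
    Returns (token, accepted)."""
    q = dict(zip(q_idx.tolist(), q_vals.tolist()))
    p = dict(zip(p_idx.tolist(), p_vals.tolist()))
    q_tok, p_tok = q.get(token, 0.0), p.get(token, 0.0)
    # Accept the draft token with probability min(1, p(x)/q(x)).
    if q_tok > 0 and rng.random() < min(1.0, p_tok / q_tok):
        return token, True
    # On rejection, resample from the residual max(0, p - q), renormalized.
    residual = np.array(
        [max(0.0, p.get(t, 0.0) - q.get(t, 0.0)) for t in range(vocab_size)]
    )
    residual /= residual.sum()
    return int(rng.choice(vocab_size, p=residual)), False
```

With, say, a 32k-token vocabulary compressed to its top 64 entries, the edge device would send a few hundred bytes per step instead of a full float vector, which is the bandwidth saving the abstract alludes to.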

Metadata

arXiv ID: 2603.19133
Provider: ARXIV
Primary Category: cs.DC
Published: 2026-03-19
Fetched: 2026-03-20 06:02
Comment: 8 pages, 6 figures
