
SLA-Aware Distributed LLM Inference Across Device-RAN-Cloud

Authors

Hariz Yet, Nguyen Thanh Tam, Mao V. Ngo, Lim Yi Shen, Lin Wei, Jihong Park, Binbin Chen, Tony Q. S. Quek

Abstract

Embodied AI requires sub-second inference near the Radio Access Network (RAN), but deployments span heterogeneous tiers (on-device, RAN-edge, cloud) and must not disrupt real-time baseband processing. We report measurements from a 5G Standalone (SA) AI-RAN testbed using a fixed baseline policy for repeatability. The setup includes an on-device tier, a three-node RAN-edge cluster co-hosting a containerized 5G RAN, and a cloud tier. We find that on-device execution remains multi-second and fails to meet sub-second budgets. At the RAN edge, SLA feasibility is primarily determined by model variant choice: quantized models concentrate below 0.5 s, while unquantized and some larger quantized models incur deadline misses due to stalls and queuing. In the cloud tier, meeting a 0.5 s deadline is challenging on the measured WAN path (up to 32.9% of requests complete within 0.5 s), but all evaluated variants meet a 1.0 s deadline (100% within 1.0 s). Under saturated downlink traffic and up to N = 20 concurrent inference clients, Multi-Instance GPU (MIG) isolation preserves baseband timing-health proxies, supporting safe co-location under fixed partitioning.
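The SLA-attainment figures quoted in the abstract (e.g. 32.9% of requests within 0.5 s, 100% within 1.0 s) are fractions of request latencies that fall under a deadline. A minimal sketch of that computation, using hypothetical latency samples rather than the paper's data:

```python
# Illustrative sketch (not the authors' code): SLA attainment is the
# percentage of requests whose end-to-end latency meets a deadline.
def sla_attainment(latencies_s, deadline_s):
    """Return the percentage of requests completing within deadline_s."""
    if not latencies_s:
        return 0.0
    met = sum(1 for t in latencies_s if t <= deadline_s)
    return 100.0 * met / len(latencies_s)

# Hypothetical per-request latency samples (seconds) for one model variant.
samples = [0.31, 0.42, 0.48, 0.55, 0.62, 0.71, 0.88, 0.93, 0.97, 0.99]

print(f"within 0.5 s: {sla_attainment(samples, 0.5):.1f}%")  # 30.0%
print(f"within 1.0 s: {sla_attainment(samples, 1.0):.1f}%")  # 100.0%
```

Reporting attainment at two deadlines (0.5 s and 1.0 s) mirrors how the paper distinguishes tight from relaxed budgets across tiers.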

Metadata

arXiv ID: 2602.23722
Provider: ARXIV
Primary Category: cs.NI
Published: 2026-02-27
Fetched: 2026-03-02 06:04

Categories: cs.NI (primary), cs.AI
Comment: Accepted to IEEE INFOCOM Workshops 2026 (6G AI-RAN 2026), Tokyo, Japan. This arXiv version is a preprint / author version.
PDF: https://arxiv.org/pdf/2602.23722v1