Paper
WVA: A Global Optimization Control Plane for llmd
Authors
Abhishek Malvankar, Lionel Villard, Mohammed Abdi, Evgeny Shindin, Braulio Dumba, Vishakha Ramani, Asser Tantawi, Tamar Eilam
Abstract
As Large Language Models (LLMs) scale to handle massive concurrent traffic, optimizing the infrastructure required for inference has become a primary challenge. To manage the high cost of GPU resources while ensuring strict service-level objectives (SLOs), operators increasingly deploy models across heterogeneous hardware clusters that multiplex latency-sensitive online requests and throughput-oriented offline requests. However, traditional resource-centric autoscalers like the Kubernetes horizontal pod autoscaler (HPA) do not consider application-specific SLOs, hardware heterogeneity, or internal engine state (like KV cache utilization) globally. This leads to unnecessary scaling, severe resource underutilization, and disrupted stateful inference. To address these limitations, we introduce the Workload Variant Autoscaler (WVA), a specialized control plane co-designed with llmd that tightly couples scaling decisions with the inference server's internal saturation state. By utilizing proactive headroom-based scaling and fragmentation-aware scale-down, our experiments demonstrate that WVA achieves a 37% improvement in effective throughput and a 10x reduction in request failures compared to HPA. Furthermore, WVA's cost-aware tiering intrinsically reduces overall power consumption by prioritizing lower-cost, energy-efficient hardware variants over homogeneous scaling on high-end accelerators.
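The two mechanisms named in the abstract, headroom-based scale-up and fragmentation-aware scale-down, can be sketched as a small decision function. This is an illustrative sketch only, not WVA's actual algorithm; the `ReplicaState` fields, the 20% headroom target, and the "only drain idle replicas" rule are assumptions for the example:

```python
from dataclasses import dataclass

# Hypothetical per-replica snapshot; field names are assumptions,
# not metrics that llmd or WVA actually export.
@dataclass
class ReplicaState:
    kv_cache_used: float   # fraction of KV-cache blocks in use, 0..1
    active_requests: int   # in-flight requests on this replica

def desired_replicas(replicas: list[ReplicaState],
                     target_headroom: float = 0.2) -> int:
    """Headroom-based scaling sketch.

    Scale up proactively when average KV-cache utilization leaves
    less free headroom than `target_headroom`; scale down only when
    a replica is fully drained AND the survivors would still keep
    the headroom target (fragmentation-aware scale-down).
    """
    n = len(replicas)
    if n == 0:
        return 1
    total_used = sum(r.kv_cache_used for r in replicas)
    avg_used = total_used / n

    # Proactive scale-up: act before the engines saturate.
    if avg_used > 1.0 - target_headroom:
        return n + 1

    # Fragmentation-aware scale-down: never evict in-flight state;
    # only remove an idle replica if the remaining fleet still has
    # enough free KV-cache headroom to absorb its load.
    has_idle = any(r.active_requests == 0 for r in replicas)
    if has_idle and n > 1 and total_used / (n - 1) <= 1.0 - target_headroom:
        return n - 1

    return n
```

A resource-centric autoscaler such as HPA would key the same decision off CPU or GPU utilization; the point of the sketch is that the signal here is the engine's internal saturation state (KV-cache occupancy), which is what lets the controller avoid both reactive thrashing and evicting stateful requests.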
Metadata
arXiv: 2603.09730v1 • Published: 2026-03-10 • Primary category: cs.ET