
WVA: A Global Optimization Control Plane for llmd

Authors

Abhishek Malvankar, Lionel Villard, Mohammed Abdi, Evgeny Shindin, Braulio Dumba, Vishakha Ramani, Asser Tantawi, Tamar Eilam

Abstract

As Large Language Models (LLMs) scale to handle massive concurrent traffic, optimizing the infrastructure required for inference has become a primary challenge. To manage the high cost of GPU resources while ensuring strict service-level objectives (SLOs), operators increasingly deploy models across heterogeneous hardware clusters that multiplex latency-sensitive online requests and throughput-oriented offline requests. However, traditional resource-centric autoscalers such as the Kubernetes Horizontal Pod Autoscaler (HPA) do not consider application-specific SLOs, hardware heterogeneity, or internal engine state (such as KV cache utilization) globally. This leads to unnecessary scaling, severe resource underutilization, and disrupted stateful inference. To address these limitations, we introduce the Workload Variant Autoscaler (WVA), a specialized control plane co-designed with llmd that tightly couples scaling decisions with the inference server's internal saturation state. By utilizing proactive headroom-based scaling and fragmentation-aware scale-down, our experiments demonstrate that WVA achieves a 37% improvement in effective throughput and a 10x reduction in request failures compared to HPA. Furthermore, WVA's cost-aware tiering intrinsically reduces overall power consumption by prioritizing lower-cost, energy-efficient hardware variants over homogeneous scaling on high-end accelerators.
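The scaling policy the abstract describes can be sketched roughly as follows. This is a minimal illustration, not WVA's implementation: the `Replica` structure, the threshold values, and the decision names are assumptions chosen to show how a controller driven by KV cache saturation (rather than CPU/memory metrics, as in HPA) might behave.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    kv_cache_util: float  # fraction of KV-cache blocks in use, 0..1
    cost_per_hour: float  # relative cost of the hardware variant

def scale_decision(replicas, high=0.85, low=0.30):
    """Headroom-based policy sketch: scale up proactively before the
    fleet saturates, and scale down when underused replicas fragment
    capacity across the fleet. Thresholds are illustrative."""
    if not replicas:
        return "scale_up"
    avg = sum(r.kv_cache_util for r in replicas) / len(replicas)
    if avg > high:
        # Headroom nearly exhausted: add capacity before requests fail.
        return "scale_up"
    idle = [r for r in replicas if r.kv_cache_util < low]
    if idle and len(replicas) > 1:
        # Fragmentation-aware consolidation: retire a near-idle replica
        # (a cost-aware tier would retire the costliest variant first).
        return "scale_down"
    return "hold"

fleet = [Replica(kv_cache_util=0.90, cost_per_hour=3.0),
         Replica(kv_cache_util=0.92, cost_per_hour=3.0)]
print(scale_decision(fleet))  # -> scale_up
```

In contrast to HPA's reactive, resource-centric triggers, a policy like this consults the engine's internal saturation signal directly, which is the coupling between autoscaler and inference server that the paper argues for.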

Metadata

arXiv ID: 2603.09730
Provider: ARXIV
Primary Category: cs.ET
Published: 2026-03-10
Fetched: 2026-03-11 06:02

