
LMetric: Simple is Better - Multiplication May Be All You Need for LLM Request Scheduling

Authors

Dingyan Zhang, Jinbo Han, Kaixi Zhang, Xingda Wei, Sijie Shen, Chenguang Fang, Wenyuan Yu, Jingren Zhou, Rong Chen

Abstract

High-quality LLM request scheduling must achieve two key objectives: routing a request to an instance whose KV cache (KV$) can accelerate its execution, and keeping the workload balanced across instances. Achieving both is challenging because pursuing one objective may compromise the other. Current approaches adopt various combinators (e.g., linear combinations) to compute a scheduling score from indicators for the two objectives. These combinators are complex: they require either significant workload-specific hyperparameter tuning or the development of model- and hardware-aware simulators, and can still lead to suboptimal performance. In this paper, we show that a simple multiplication of two carefully chosen indicators, a KV$-aware one (the number of new prefill tokens if the request is routed to an instance) and a load-balancing-aware one (the instance's current batch size), used as the scheduling score, can achieve both objectives well without any hyperparameter tuning. The key idea is that the multiplied score accounts for both objectives much like a linear combination does, with the nice property that the hyperparameters cancel out during comparison, so no tuning is needed to find the best parameters. The two indicators are chosen based on our analysis of LLM characteristics, and our extensive experiments show that this simple approach reduces TTFT (time to first token) by 92% and 52%, and TPOT (time per output token) by 21% and 20%, compared to vLLM-v1 and a production scheduler, respectively, on real-world workloads covering chatbots, API calls, and coding agents. We also mathematically derive the conditions under which multiplication may fail, and find that such conditions are extremely rare in practice and can be detected (and mitigated) beforehand.
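
To make the scheduling rule concrete, the following is a minimal Python sketch (not the authors' implementation) of the multiplicative score described above, assuming a simple longest-prefix token match as the KV$-aware indicator. The Instance bookkeeping and the +1 added to the batch size (so an idle instance does not zero out every score) are illustrative assumptions that the abstract does not specify.

# Sketch of multiplicative LLM request scheduling: route each request to the
# instance minimizing
#   score = (new prefill tokens if routed there) * (current batch size).
# The prefix-match and batch-size bookkeeping below is simplified for illustration.

from dataclasses import dataclass, field

@dataclass
class Instance:
    name: str
    batch_size: int  # requests currently running on this instance
    # Token-ID prefixes assumed to be resident in this instance's KV cache.
    cached_prefixes: list[tuple[int, ...]] = field(default_factory=list)

    def new_prefill_tokens(self, prompt: tuple[int, ...]) -> int:
        """Tokens that would still need prefill here, given the longest cached prefix."""
        best = 0
        for prefix in self.cached_prefixes:
            matched = 0
            for cached_tok, new_tok in zip(prefix, prompt):
                if cached_tok != new_tok:
                    break
                matched += 1
            best = max(best, matched)
        return len(prompt) - best

def route(prompt: tuple[int, ...], instances: list[Instance]) -> Instance:
    """Pick the instance with the smallest product of the two indicators.

    The +1 on batch size is an assumption made here so that an idle instance
    with no cache hit is not always scored 0; the paper's exact handling of
    this corner case is not described in the abstract."""
    return min(
        instances,
        key=lambda inst: inst.new_prefill_tokens(prompt) * (inst.batch_size + 1),
    )

# Hypothetical usage: instance "a" is busier but holds a matching prefix,
# so its lower prefill cost wins over the lightly loaded instance "b".
instances = [
    Instance("a", batch_size=4, cached_prefixes=[(1, 2, 3, 4)]),
    Instance("b", batch_size=1),
]
print(route((1, 2, 3, 4, 5, 6), instances).name)  # -> "a" (score 2*5=10 vs 6*2=12)

One plausible reading of the cancellation property, under these assumptions: comparing products x1*y1 < x2*y2 is equivalent to comparing log x1 + log y1 < log x2 + log y2, an equal-weight linear combination in log space, and rescaling either indicator by a constant factor rescales every score identically, so the ranking of instances, and hence the routing decision, is unchanged without any tuned weights.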

Metadata

arXiv ID: 2603.15202
Provider: ARXIV
Primary Category: cs.DC
Published: 2026-03-16
Fetched: 2026-03-17 06:02
