
PromptTuner: SLO-Aware Elastic System for LLM Prompt Tuning

Authors

Wei Gao, Peng Sun, Dmitrii Ustiugov, Tianwei Zhang, Yonggang Wen

Abstract

Prompt tuning has become a prominent strategy for enhancing the performance of Large Language Models (LLMs) on downstream tasks. Many IT enterprises now offer Prompt-Tuning-as-a-Service to meet the growing demand for tuning LLM prompts for downstream tasks. Their primary objective is to satisfy users' Service Level Objectives (SLOs) while reducing resource provisioning costs. Nevertheless, our characterization analysis of existing deep learning resource management systems reveals that they are insufficient to optimize these objectives for LLM prompt tuning workloads. In this paper, we introduce PromptTuner, an SLO-aware elastic system to optimize LLM prompt tuning. It contains two innovations. (1) We design a Prompt Bank to identify efficient initial prompts that expedite the convergence of prompt tuning. (2) We develop a Workload Scheduler that enables fast resource allocation to reduce SLO violations and resource costs. In our evaluation, PromptTuner reduces SLO violations by 4.0x and 7.9x, and lowers costs by 1.6x and 4.5x, compared to INFless and ElasticFlow, respectively.
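
The abstract does not detail how the Prompt Bank works. Below is a minimal sketch of one plausible reading: previously tuned soft prompts are stored keyed by a task embedding, and a new tuning job is warm-started from the most similar stored task. Everything here is an assumption for illustration, not the paper's actual design; the class name, the cosine-similarity matching, and the reuse threshold are all hypothetical.

```python
import numpy as np

class PromptBank:
    """Hypothetical prompt bank sketch (not the paper's implementation):
    stores tuned soft prompts keyed by a task embedding and warm-starts
    a new tuning job from the most similar stored task."""

    def __init__(self, prompt_len: int, embed_dim: int, min_sim: float = 0.8):
        self.prompt_len = prompt_len
        self.embed_dim = embed_dim
        self.min_sim = min_sim            # assumed similarity threshold for reuse
        self.task_embs: list[np.ndarray] = []
        self.prompts: list[np.ndarray] = []

    def add(self, task_emb: np.ndarray, tuned_prompt: np.ndarray) -> None:
        """Register a finished job's task embedding and its tuned prompt."""
        self.task_embs.append(task_emb / np.linalg.norm(task_emb))
        self.prompts.append(tuned_prompt)

    def initial_prompt(self, task_emb: np.ndarray) -> np.ndarray:
        """Return an initial soft prompt for a new task."""
        if self.task_embs:
            q = task_emb / np.linalg.norm(task_emb)
            sims = np.stack(self.task_embs) @ q    # cosine similarities
            best = int(np.argmax(sims))
            if sims[best] >= self.min_sim:
                return self.prompts[best].copy()   # warm start from similar task
        # No sufficiently similar task stored: random initialization.
        return 0.02 * np.random.randn(self.prompt_len, self.embed_dim)
```

Under this reading, a warm start shortens convergence (and hence job runtime), which is what would let the Workload Scheduler meet SLOs with fewer provisioned resources.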

Metadata

arXiv ID: 2603.05087
Provider: ARXIV
Primary Category: cs.DC
Published: 2026-03-05
Fetched: 2026-03-06 14:20
