AI LLM March 18, 2026

Swarm: Co-Activation Aware KVCache Offloading Across Multiple SSDs

Authors

Tuowei Wang, Liyun Chu, Ruwen Fan, Ju Ren

Abstract

The key-value (KV) cache has become the dominant contributor to memory consumption in large language model (LLM) inference. Although offloading KVCache from GPU high-bandwidth memory (HBM) to CPU DRAM alleviates device memory pressure, DRAM remains capacity-limited and costly for large, persistent workloads. Solid-state drives (SSDs) provide a cost-effective alternative, but naive SSD-based paging is fundamentally bandwidth-bound due to limited PCIe throughput and per-device bandwidth constraints. In this paper, we observe that KVCache activations in real-world workloads exhibit strong and stable correlations. We term this phenomenon KVCache Co-Activation, where accessing a KV entry is often accompanied by a stable and recurring set of other KV entries. Leveraging this property, we present Swarm, an SSD-based KVCache offloading system that converts bandwidth-bound single-device access into parallel I/O across multiple SSDs. Specifically, Swarm clusters co-activated KV entries offline and distributes the resulting clusters across SSDs using graph-based placement with selective replication to maximize parallel I/O bandwidth. At runtime, Swarm performs load-balanced cluster retrieval and dynamically adapts clustering and caching decisions to sustain high bandwidth utilization under evolving access patterns. Evaluations show that Swarm reduces I/O time by 2.41x and improves effective bandwidth utilization by 2.72x.
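The paper's pipeline — mine co-activation statistics, cluster co-activated KV entries, then spread clusters across SSDs for parallel I/O — can be illustrated with a minimal sketch. Note this is an illustrative approximation, not Swarm's actual algorithm: the function names, the pairwise co-occurrence counting, the threshold-based union-find clustering, and the greedy least-loaded placement are all assumptions standing in for the paper's graph-based methods, and selective replication is omitted.

```python
from collections import defaultdict
from itertools import combinations

def build_coactivation_graph(traces):
    # Edge weight = number of requests in which two KV entries co-occur.
    w = defaultdict(int)
    for t in traces:
        for a, b in combinations(sorted(set(t)), 2):
            w[(a, b)] += 1
    return w

def cluster_entries(entries, weights, threshold):
    # Union-find: merge entries whose co-activation count meets the
    # threshold, so stably co-activated entries land in one cluster.
    parent = {e: e for e in entries}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for (a, b), count in weights.items():
        if count >= threshold:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
    groups = defaultdict(list)
    for e in entries:
        groups[find(e)].append(e)
    return list(groups.values())

def place_clusters(clusters, num_ssds):
    # Greedy balanced placement: assign each cluster (largest first) to
    # the currently least-loaded SSD, so a request that touches several
    # clusters can read them from different devices in parallel.
    loads = [0] * num_ssds
    placement = []
    for c in sorted(clusters, key=len, reverse=True):
        ssd = min(range(num_ssds), key=loads.__getitem__)
        placement.append((ssd, c))
        loads[ssd] += len(c)
    return placement, loads
```

Under this sketch, entries that recur together in the access traces end up in one cluster, and clusters are spread so per-SSD load (and hence read bandwidth demand) stays balanced — the effect the abstract attributes to Swarm's graph-based placement.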

Metadata

arXiv ID: 2603.17803
Provider: ARXIV
Primary Category: cs.PF
Published: 2026-03-18
Fetched: 2026-03-19 06:01
