AI LLM March 05, 2026

Hardware-Software Co-design for 3D-DRAM-based LLM Serving Accelerator

Authors

Cong Li, Yihan Yin, Chenhao Xue, Zhao Wang, Fujun Bai, Yixin Guo, Xiping Jiang, Qiang Wu, Yuan Xie, Guangyu Sun

Abstract

Large language models (LLMs) have been widely deployed for online generative services, where numerous LLM instances jointly handle workloads with fluctuating request arrival rates and variable request lengths. To efficiently execute coexisting compute-intensive and memory-intensive operators, near-memory processing (NMP) based computing paradigms have been extensively proposed. However, existing NMP designs adopt coarse-grained KV cache management and an inflexible attention execution flow. These limitations hinder such proposals from efficiently handling highly dynamic LLM serving workloads, limiting their ability to accelerate LLM serving. To tackle these problems, we propose Helios, a Hybrid-bonding-based LLM Serving accelerator. Helios aims to bridge the fundamental gap between the dynamic nature of KV cache management in LLM serving and the distributed, non-uniform memory abstraction among NMP processing engines (PEs). To this end, we design both the intra-PE execution flow and the inter-PE communication primitives for distributed tiled attention execution. We further propose a spatially-aware KV cache allocation mechanism that balances the attention workload distribution while minimizing inter-PE data transfer overhead. Compared with existing GPU/NMP designs, Helios achieves a 3.25x (geomean) speedup and 3.36x (geomean) better energy efficiency, along with up to 72%/76% P50/P99 time-between-tokens degradation.
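The "distributed tiled attention execution" mentioned in the abstract presupposes that attention over a KV cache split across PEs can be computed per tile and then merged exactly. A minimal sketch of that merge, using the standard online-softmax recombination (FlashAttention-style partial statistics), is shown below. This is an illustrative assumption about the general technique, not the paper's actual intra-PE flow or inter-PE primitives; all function names here are hypothetical.

```python
import numpy as np

def partial_attention(q, k, v):
    """Attention over one PE's KV tile: unnormalized output plus softmax stats."""
    s = q @ k.T / np.sqrt(q.shape[-1])   # (1, T_tile) attention scores
    m = s.max(axis=-1, keepdims=True)    # per-tile running max (numerical stability)
    p = np.exp(s - m)
    l = p.sum(axis=-1, keepdims=True)    # per-tile softmax denominator
    o = p @ v                            # unnormalized partial output
    return o, m, l

def merge_partials(parts):
    """Combine per-PE partials into the exact global attention output."""
    m_glob = np.max([m for _, m, _ in parts], axis=0)
    l_glob = sum(np.exp(m - m_glob) * l for _, m, l in parts)
    o_glob = sum(np.exp(m - m_glob) * o for o, m, _ in parts)
    return o_glob / l_glob

rng = np.random.default_rng(0)
d, T = 8, 32
q = rng.standard_normal((1, d))
k = rng.standard_normal((T, d))
v = rng.standard_normal((T, d))

# Split the KV cache into two "PE" tiles, compute locally, then merge.
parts = [partial_attention(q, k[i:i + 16], v[i:i + 16]) for i in (0, 16)]
out = merge_partials(parts)

# Reference: monolithic softmax attention over the whole KV cache.
s = q @ k.T / np.sqrt(d)
w = np.exp(s - s.max())
ref = (w / w.sum()) @ v
assert np.allclose(out, ref)
```

Because the merge is exact, the KV cache can in principle be partitioned across PEs in any layout; which layout balances load while minimizing inter-PE transfers is precisely what the paper's spatially-aware allocation mechanism addresses.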

Metadata

arXiv ID: 2603.04797
Provider: ARXIV
Primary Category: cs.AR
Published: 2026-03-05
Fetched: 2026-03-07 04:35
