
DUET: Disaggregated Hybrid Mamba-Transformer LLMs with Prefill and Decode-Specific Packages

Authors

Alish Kanani, Sangwan Lee, Han Lyu, Jiahao Lin, Jaehyun Park, Umit Y. Ogras

Abstract

Large language models operate in two distinct phases: a compute-bound prefill followed by a memory-bandwidth-bound decode. Hybrid Mamba-Transformer models inherit this asymmetry while adding state space model (SSM) recurrences and element-wise operations that map poorly to matmul-centric accelerators. This mismatch causes performance bottlenecks, showing that no homogeneous architecture can satisfy all requirements. We introduce DUET, a disaggregated accelerator that assigns the prefill and decode phases to specialized packages. The Prefill package uses systolic-array chiplets with off-package memory for efficient large matrix multiplications and long-sequence SSM scans. The Decode package uses vector-unit arrays with high-bandwidth in-package memory to accelerate token-by-token SSM updates and vector-matrix multiplications. Both architectures are runtime-configurable to support hybrid models that mix Mamba and attention layers. Evaluations on Nemotron-H-56B, Zamba2-7B, and Llama3-8B across four workloads show that DUET achieves 4x faster time to first token, 1.4x higher throughput, and 1.5x lower time between tokens than the B200 GPU.
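The prefill/decode asymmetry the abstract describes can be illustrated with a minimal sketch of a diagonal, Mamba-style SSM layer. This is not the paper's model or code; the function names, shapes, and the use of a fixed (non-input-dependent) diagonal state matrix are illustrative assumptions. Prefill processes the whole prompt and is dominated by matmul-friendly work, while each decode step is a single element-wise state update plus small vector-matrix products, which is memory-bandwidth bound.

```python
import numpy as np

def ssm_prefill(x, A, B, C):
    """Process an entire prompt: a scan over L tokens.
    x: (L, d_in) inputs; A: (d_state,) diagonal decay; B: (d_state, d_in);
    C: (d_out, d_state). Returns all outputs and the final hidden state."""
    L = x.shape[0]
    h = np.zeros(B.shape[0])
    y = np.empty((L, C.shape[0]))
    for t in range(L):
        h = A * h + B @ x[t]   # recurrence: element-wise decay + matvec
        y[t] = C @ h
    return y, h

def ssm_decode_step(x_t, h, A, B, C):
    """Generate one token: one element-wise update plus small
    vector-matrix products -- little matmul work per byte moved."""
    h = A * h + B @ x_t
    return C @ h, h
```

Carrying the hidden state `h` from prefill into repeated `ssm_decode_step` calls reproduces the full scan token by token, which is why a decode-specialized package can serve generation without redoing prefill work.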

Metadata

arXiv ID: 2603.15530
Provider: ARXIV
Primary Category: cs.AR
Categories: cs.AR, cs.DC
Comment: Accepted for publication at the Design Automation Conference (DAC) 2026
Published: 2026-03-16
Fetched: 2026-03-17 06:02
