Paper
Ouroboros: Wafer-Scale SRAM CIM with Token-Grained Pipelining for Large Language Model Inference
Authors
Yiqi Liu, Yudong Pan, Mengdi Wang, Shixin Zhao, Haonan Zhu, Yinhe Han, Lei Zhang, Ying Wang
Abstract
Conventional LLM inference architectures suffer from high energy and latency due to frequent data movement across memory hierarchies. We propose Ouroboros, a wafer-scale SRAM-based Computing-in-Memory (CIM) architecture that executes all operations in situ, eliminating off-chip data migration. To maximize its limited first-level capacity, we introduce three innovations:
Token-Grained Pipelining: replaces sequence-level pipelining to mitigate sequence-length variation, boosting utilization and reducing activation storage.
Distributed Dynamic KV Cache Management: decouples memory from compute to leverage fragmented SRAM for efficient KV storage.
Communication-Aware Mapping: optimizes core allocation for locality and fault tolerance across the wafer.
Experimental results show Ouroboros achieves average gains of $4.1\times$ in throughput and $4.2\times$ in energy efficiency, peaking at $9.1\times$ and $17\times$ for the 13B model.
(*Because arXiv limits the Abstract field to 1,920 characters, the abstract shown here is shortened. For the full Abstract, please download the article.)
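To build intuition for the first innovation, here is a minimal toy model, not the paper's scheduler, of why token-grained pipelining tolerates sequence-length variation better than sequence-level pipelining. It assumes a linear pipeline of identical stages, one time step per token per stage, and that a sequence-level pipeline admits one whole sequence into a stage at a time; the stage count, sequence lengths, and utilization metric are illustrative assumptions, not values from the paper.

```python
import random

def sequence_level_makespan(lengths, stages):
    # Flow-shop recurrence: sequence i enters stage s only after
    # (a) it leaves stage s-1 and (b) sequence i-1 leaves stage s.
    prev_finish = [0] * stages  # when the previous sequence left each stage
    for length in lengths:
        t = 0
        for s in range(stages):
            t = max(t, prev_finish[s]) + length  # occupy stage s for `length` steps
            prev_finish[s] = t
    return prev_finish[-1]

def token_grained_makespan(lengths, stages):
    # Tokens stream back-to-back through the pipeline, so the only
    # overhead beyond the total token count is the initial pipeline fill.
    return sum(lengths) + stages - 1

random.seed(0)
lengths = [random.randint(16, 512) for _ in range(32)]  # varied sequence lengths
stages = 8
work = sum(lengths)  # token-steps of useful work per stage
for name, makespan in (
    ("sequence-level", sequence_level_makespan(lengths, stages)),
    ("token-grained ", token_grained_makespan(lengths, stages)),
):
    print(f"{name}: makespan={makespan:6d} steps, "
          f"stage utilization={work / makespan:.1%}")
```

Under this model the token-grained schedule pays only the `stages - 1` fill steps, while the sequence-level schedule stalls every stage behind whichever in-flight sequence is longest; this is the utilization gap the abstract attributes to length variation.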
Metadata
arXiv: 2603.02737v1 (cs.AR)
Published: 2026-03-03
Comments: 17 pages, 21 figures, ASPLOS 2026
Links: https://arxiv.org/abs/2603.02737v1 (abstract), https://arxiv.org/pdf/2603.02737v1 (PDF)