
Fast Entropy Decoding for Sparse MVM on GPUs

Authors

Emil Schätzle, Tommaso Pegolotti, Markus Püschel

Abstract

We present a novel, practical approach to speeding up sparse matrix-vector multiplication (SpMVM) on GPUs. The key idea is to apply lossless entropy coding to further compress the sparse matrix when it is stored in one of the commonly supported formats. Our method is based on dtANS, our new lossless compression method that specializes the entropy coding technique of asymmetric numeral systems (ANS) for fast parallel GPU decoding when used in tandem with SpMVM. We apply dtANS to the widely used CSR format and present extensive benchmarks on the SuiteSparse collection of matrices against the state-of-the-art cuSPARSE library. On matrices with at least 2^(15) entries and at least 10 entries per row on average, our compression reduces the matrix size relative to the smallest cuSPARSE format (CSR, COO, or SELL) in almost all cases, by up to 11.77x. Further, we achieve an SpMVM speedup for the majority of matrices with at least 2^(25) nonzero entries; the best speedup is 3.48x. We also show that we can improve over the AI-based multi-format AlphaSparse in an experiment whose scope is limited by AlphaSparse's extreme computational overhead. We provide our code as an open-source C++/CUDA header library that includes both compression and multiplication kernels.
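For context, the CSR format that the paper compresses stores each row's nonzeros contiguously in a column-index array and a value array, with a row-pointer array marking row boundaries. A minimal sequential reference for CSR SpMVM is sketched below; this is an illustration of the format only, not the paper's GPU kernel, and all names in it are our own.

```cpp
#include <cstddef>
#include <vector>

// Sparse matrix in CSR form: row_ptr has n_rows + 1 entries, and
// col_idx/values hold the nonzeros of row r at positions
// [row_ptr[r], row_ptr[r + 1]).
struct Csr {
    std::vector<std::size_t> row_ptr;
    std::vector<std::size_t> col_idx;
    std::vector<double> values;
};

// Computes y = A * x for a CSR matrix A. A GPU kernel would instead
// process rows in parallel (here after entropy-decoding the arrays).
std::vector<double> spmvm(const Csr& a, const std::vector<double>& x) {
    std::size_t n_rows = a.row_ptr.size() - 1;
    std::vector<double> y(n_rows, 0.0);
    for (std::size_t r = 0; r < n_rows; ++r)
        for (std::size_t k = a.row_ptr[r]; k < a.row_ptr[r + 1]; ++k)
            y[r] += a.values[k] * x[a.col_idx[k]];
    return y;
}
```

For example, the 2x3 matrix [[1, 0, 2], [0, 3, 0]] is encoded as row_ptr = {0, 2, 3}, col_idx = {0, 2, 1}, values = {1, 2, 3}.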

Metadata

arXiv ID: 2603.01915
Provider: ARXIV
Primary Category: cs.PF
Published: 2026-03-02
Fetched: 2026-03-03 04:34

Comment: To appear in the 40th IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2026.
Reproducibility Appendix: https://doi.org/10.5281/zenodo.18694064
PDF: https://arxiv.org/pdf/2603.01915v1