
PRAC: Principal-Random Subspace for LLM Activation Compression and Memory-Efficient Training

Authors

Yanyi Li, Yimu Zhang, Cong Fang

Abstract

Activations have become the primary memory bottleneck in large-batch LLM training, yet existing compression methods fail to exploit the spectral structure of activations, resulting in slow convergence or limited compression. To address this, we connect the algorithm's convergence rate to the requirements on the subspace projection, and show that an effective compression should yield an unbiased, low-variance estimate of the original activation. We propose Principal-Random Subspace for LLM Activation Compression (PRAC), which decomposes activations into two components: a principal subspace captured via SVD to retain the dominant information, and a random subspace sampled from the orthogonal complement to approximate the tail. By introducing a precise scaling factor, we prove that PRAC yields an unbiased gradient estimator with minimum variance under certain conditions. Extensive experiments on pre-training and fine-tuning tasks demonstrate that PRAC achieves up to 36% total memory reduction with negligible performance degradation and minimal computational cost.
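The abstract outlines the core construction: keep a rank-k principal subspace from an SVD of the activation, sample r random directions from its orthogonal complement, and rescale the random part so the reconstruction is unbiased. Below is a minimal NumPy sketch of that idea, reconstructed purely from the abstract; the function names (`prac_compress`, `prac_reconstruct`) and the specific scaling factor (d - k) / r are assumptions, not the authors' actual code.

```python
import numpy as np

def prac_compress(A, k, r, seed=None):
    """Hypothetical sketch of PRAC-style activation compression,
    reconstructed from the abstract (not the paper's implementation).

    A : (n, d) activation matrix
    k : rank of the principal subspace (top singular directions)
    r : number of random directions from the orthogonal complement

    Stores (n, k) + (n, r) projections instead of the full (n, d) matrix.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    assert k + r <= d, "principal + random ranks must fit inside d"

    # Principal subspace: top-k right singular vectors of A.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k].T                        # (d, k) orthonormal basis

    # Random subspace: r orthonormal directions drawn uniformly from the
    # (d - k)-dimensional orthogonal complement of the principal subspace.
    G = rng.standard_normal((d, r))
    G -= Vk @ (Vk.T @ G)                 # project out the principal part
    Q, _ = np.linalg.qr(G)               # (d, r) orthonormal basis

    # For a uniformly random r-dim subspace of the complement,
    # E[Q Q^T] = (r / (d - k)) * P_perp, so scaling by (d - k) / r
    # recovers the spectral tail in expectation (unbiased estimator).
    scale = (d - k) / r
    return A @ Vk, A @ Q, Vk, Q, scale

def prac_reconstruct(AVk, AQ, Vk, Q, scale):
    """Unbiased estimate of the original activation: E[A_hat] = A."""
    return AVk @ Vk.T + scale * (AQ @ Q.T)
```

Under these assumptions, the principal component is kept exactly and only the tail is randomized, so averaging `prac_reconstruct` over independent draws of Q converges to A; the memory saving comes from caching the (n, k) and (n, r) projections in place of the full (n, d) activation, which is substantial whenever k + r is much smaller than d.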

Metadata

arXiv ID: 2602.23111
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-02-26
Fetched: 2026-02-27 04:35
