Deterministic Differentiable Structured Pruning for Large Language Models

Authors

Weiyu Huang, Pengle Zhang, Xiaolu Zhang, Jun Zhou, Jun Zhu, Jianfei Chen

Abstract

Structured pruning reduces LLM inference cost by removing low-importance architectural components. This can be viewed as learning a multiplicative gate for each component under an l0 sparsity constraint. Due to the discreteness of the l0 norm, prior work typically adopts stochastic hard-concrete relaxations to enable differentiable optimization; however, this stochasticity can introduce a train-test mismatch when sampled masks are discretized for deployment and restricts masks to a bounded, near-binary range. To address this, we propose Deterministic Differentiable Pruning (DDP), a mask-only optimization method that eliminates stochasticity by directly optimizing a deterministic soft surrogate of the discrete l0 objective. Compared with prior approaches, DDP offers greater expressiveness, reduced train-test mismatch, and faster convergence. We apply our method to several dense and MoE models, including Qwen3-32B and Qwen3-30B-A3B, achieving a performance loss as small as 1% on downstream tasks while outperforming previous methods at 20% sparsity. We further demonstrate end-to-end inference speedups in realistic deployment settings with vLLM.
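To make the contrast concrete, the sketch below illustrates the two gate parameterizations the abstract refers to: a stochastic hard-concrete gate (the relaxation used in prior l0-pruning work, e.g. Louizos et al.), whose sampled mask must later be discretized for deployment, versus a deterministic soft gate whose training-time value carries over directly to test time. This is an illustrative sketch only; the abstract does not specify DDP's actual surrogate, so the function names and the sigmoid parameterization here are assumptions, not the paper's method.

```python
import math
import random

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Stochastic hard-concrete gate: sample u ~ U(0,1), pass it through a
    stretched sigmoid relaxation, then clamp to [0, 1].  The sampled mask
    differs from run to run and is typically thresholded at deployment,
    which is the source of the train-test mismatch noted in the abstract."""
    u = random.random()
    s = 1.0 / (1.0 + math.exp(-(math.log(u) - math.log(1.0 - u) + log_alpha) / beta))
    s_bar = s * (zeta - gamma) + gamma          # stretch to the interval (gamma, zeta)
    return min(1.0, max(0.0, s_bar))            # hard clamp into [0, 1]

def deterministic_gate(theta, temperature=1.0):
    """Deterministic soft gate: no sampling, so the same mask value is used
    in training and at test time (up to any final rounding).  A hypothetical
    stand-in for DDP's deterministic surrogate, not its actual form."""
    return 1.0 / (1.0 + math.exp(-theta / temperature))

def l0_surrogate(gates):
    """Soft surrogate of the l0 norm: summing soft gate values approximates
    the number of active (unpruned) components."""
    return sum(gates)
```

Under this framing, the sparsity penalty is applied to `l0_surrogate` of all gates, and optimization proceeds by ordinary gradient descent on the gate parameters while the model weights stay frozen (a "mask-only" setup, as the abstract describes).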

Metadata

arXiv ID: 2603.08065
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-09
Fetched: 2026-03-10 05:43
