Paper
Pruning for efficient deterministic global optimization over trained ReLU neural networks
Authors
Giacomo Lastrucci, Tanuj Karia, Victor Schulte, Dominik Bongartz, Artur M. Schweidtmann
Abstract
Neural networks are increasingly used as surrogates in optimization problems to replace computationally expensive models. However, embedding ReLU neural networks in mathematical programs introduces significant computational challenges, particularly for deep and wide networks, due to both the formulation of the ReLU disjunction and the resulting large-scale optimization problem. This work investigates how pruning techniques can accelerate the solution of optimization problems with embedded neural networks, focusing on the mechanisms underlying the computational gains. We provide theoretical insights into how both unstructured (weight) and structured (node) pruning affect the ReLU big-M formulation, showing that pruning monotonically tightens preactivation bounds. We conduct comprehensive empirical studies across multiple network architectures using an illustrative test function and a realistic chemical process flowsheet optimization case study. Our results show that pruning achieves speedups of up to three to four orders of magnitude, with computational gains attributed to three key factors: (i) reduction in problem size, (ii) decrease in the number of integer variables, and (iii) tightening of big-M bounds. Weight pruning is particularly effective for deep, narrow networks, while node pruning performs better for shallow, wide networks or medium-sized ones. In the chemical engineering case study, pruning enabled convergence within seconds for problems that were otherwise intractable. We recommend adopting pruning as standard practice when developing neural network surrogates for optimization, especially for engineering applications requiring repeated optimization solves.
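For context on the formulation the abstract refers to: the ReLU disjunction y = max(0, w^T x + b) is commonly encoded as a mixed-integer constraint set with one binary variable per unit. The sketch below shows the standard big-M encoding from the literature, where L and U are valid bounds on the preactivation (the paper's exact formulation may differ in detail):

\begin{aligned}
y &\ge w^\top x + b, \\
y &\le w^\top x + b - L\,(1 - z), \\
y &\le U z, \qquad y \ge 0, \quad z \in \{0, 1\}, \qquad L \le w^\top x + b \le U.
\end{aligned}

The tighter the interval [L, U], the tighter the LP relaxation of this encoding, which is why bound tightening translates into faster branch-and-bound. The following minimal Python sketch (a hypothetical illustration, not code from the paper; the function name preactivation_bounds is invented here) shows one interval-arithmetic argument for why pruning can only shrink these intervals when inputs live in a box: each weight contributes |w_j| * (x_hi_j - x_lo_j) >= 0 to the bound width, so zeroing it never widens the interval.

import numpy as np

def preactivation_bounds(W, b, x_lo, x_hi):
    """Interval-arithmetic bounds on W @ x + b for x in [x_lo, x_hi].
    Positive weights attain their extremes at x_hi/x_lo, and vice versa."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    L = W_pos @ x_lo + W_neg @ x_hi + b  # lower preactivation bound
    U = W_pos @ x_hi + W_neg @ x_lo + b  # upper preactivation bound
    return L, U

# Toy demonstration with magnitude-based unstructured (weight) pruning.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
b = rng.normal(size=4)
x_lo, x_hi = -np.ones(6), np.ones(6)

L0, U0 = preactivation_bounds(W, b, x_lo, x_hi)
W_pruned = np.where(np.abs(W) < 0.5, 0.0, W)  # zero out small weights
L1, U1 = preactivation_bounds(W_pruned, b, x_lo, x_hi)

# The bound width U - L equals sum_j |W_ij| * (x_hi_j - x_lo_j), so it is
# monotonically non-increasing under weight pruning.
assert np.all(U1 - L1 <= U0 - L0 + 1e-12)
print("widths before:", U0 - L0)
print("widths after: ", U1 - L1)

Node (structured) pruning removes entire units, which both shrinks the propagated intervals in the same way and eliminates the unit's binary variable altogether, consistent with the three speedup factors listed in the abstract.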
Metadata
arXiv ID: 2603.23299v1
Published: 2026-03-24
Primary category: math.OC
Links: https://arxiv.org/abs/2603.23299v1 (abstract), https://arxiv.org/pdf/2603.23299v1 (PDF)
Related papers
Fractal universe and quantum gravity made simple
Fabio Briscese, Gianluca Calcagni • 2026-03-25
POLY-SIM: Polyglot Speaker Identification with Missing Modality Grand Challenge 2026 Evaluation Plan
Marta Moscati, Muhammad Saad Saeed, Marina Zanoni, Mubashir Noman, Rohan Kuma... • 2026-03-25
LensWalk: Agentic Video Understanding by Planning How You See in Videos
Keliang Li, Yansong Li, Hongze Shen, Mengdi Liu, Hong Chang, Shiguang Shan • 2026-03-25
Orientation Reconstruction of Proteins using Coulomb Explosions
Tomas André, Alfredo Bellisario, Nicusor Timneanu, Carl Caleman • 2026-03-25
The role of spatial context and multitask learning in the detection of organic and conventional farming systems based on Sentinel-2 time series
Jan Hemmerling, Marcel Schwieder, Philippe Rufin, Leon-Friedrich Thomas, Mire... • 2026-03-25