Paper
Uncertainty-Aware Calculation of Analytical Gradients of Matrix-Interpolatory Reduced-Order Models for Efficient Structural Optimization
Authors
Marcel Warzecha, Sebastian Resch-Schopper, Gerhard Müller
Abstract
This paper presents an adaptive sampling algorithm tailored to the optimization of parametrized dynamical systems using projection-based model order reduction. Unlike classical sampling strategies, this framework does not aim for a small approximation error in a global sense but focuses on identifying and refining promising regions early on while reducing expensive full-order model evaluations. The algorithm is tested on two models, a Timoshenko beam and a Kelvin cell, which are to be optimized with respect to their frequency-domain output. To this end, different norms of the transfer function serve as objective functions, while up to two geometrical parameters form the vector of design variables. The sampled full-order models are reduced using the iterative rational Krylov algorithm and reprojected into a global basis. Subsequently, the models are parametrized by performing sparse Bayesian regression at the entry level of the reduced operator matrices. Thompson sampling is carried out using the posterior distribution of the polynomial coefficients in order to account for uncertainties in the trained regression models. The strategy deployed for sample acquisition incorporates a gradient-based search on the parametrized reduced-order model, which involves analytical gradients obtained via adjoint sensitivity analysis. The sample set is iteratively refined by adding each optimum found. Results demonstrate robust convergence towards the global optimum but highlight the computational cost introduced by the gradient-based optimization. The probabilistic extensions integrate seamlessly into existing matrix-interpolatory reduction frameworks and enable the analytical calculation of gradients under uncertainty.
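The entry-level parametrization and Thompson sampling described above can be sketched as follows. This is a hypothetical minimal example, not the authors' implementation: one entry of a reduced operator matrix is fitted with Bayesian polynomial regression over a scalar design parameter (toy data, made-up values), and Thompson sampling draws one coefficient vector from the posterior to obtain a random realization of the surrogate entry.

```python
# Hypothetical sketch (not the paper's code): Bayesian polynomial regression
# on a single reduced-matrix entry, followed by Thompson sampling from the
# coefficient posterior.
import numpy as np

rng = np.random.default_rng(0)

def poly_features(p, degree=2):
    """Monomial basis [1, p, p^2, ...] for a scalar design parameter p."""
    return np.array([p**d for d in range(degree + 1)])

def posterior(Phi, y, alpha=1e-2, beta=1e2):
    """Posterior mean and covariance of Bayesian linear regression weights
    with prior precision alpha and noise precision beta."""
    S_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ y
    return m, S

# Toy training data: entry K_r[0, 0] of a reduced stiffness matrix
# "observed" at a few sampled parameter values (values are illustrative).
p_samples = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
k_entries = 2.0 + 1.5 * p_samples + 0.2 * p_samples**2

Phi = np.vstack([poly_features(p) for p in p_samples])
m, S = posterior(Phi, k_entries)

# Thompson sampling: draw one coefficient vector from the posterior and
# evaluate the resulting surrogate entry at an unseen parameter value.
w = rng.multivariate_normal(m, S)
p_new = 0.6
k_sampled = poly_features(p_new) @ w   # one random realization
k_mean = poly_features(p_new) @ m      # posterior-mean prediction
```

In the full scheme, each entry of every reduced operator would carry its own posterior, so one Thompson draw yields a complete random reduced-order model on which the gradient-based search can run.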
Metadata
arXiv ID: 2602.23314v1
Published: 2026-02-26
Primary category: cs.CE
Comment: 25 pages, 10 figures
Links: https://arxiv.org/abs/2602.23314v1 (abstract), https://arxiv.org/pdf/2602.23314v1 (PDF)