February 26, 2026

Uncertainty-Aware Calculation of Analytical Gradients of Matrix-Interpolatory Reduced-Order Models for Efficient Structural Optimization

Authors

Marcel Warzecha, Sebastian Resch-Schopper, Gerhard Müller

Abstract

This paper presents an adaptive sampling algorithm tailored to the optimization of parametrized dynamical systems using projection-based model order reduction. Unlike classical sampling strategies, this framework does not aim for a small approximation error in a global sense but focuses on identifying and refining promising regions early on while reducing the number of expensive full-order model evaluations. The algorithm is tested on two models, a Timoshenko beam and a Kelvin cell, which are to be optimized with respect to the system output in the frequency domain. To this end, different norms of the transfer function serve as the objective function, while up to two geometrical parameters form the vector of design variables. The sampled full-order models are reduced using the iterative rational Krylov algorithm and reprojected into a global basis. Subsequently, the models are parametrized by performing sparse Bayesian regression at the level of individual entries of the reduced operators. Thompson sampling is carried out using the posterior distribution of the polynomial coefficients in order to account for uncertainties in the trained regression models. The sample-acquisition strategy incorporates a gradient-based search on the parametrized reduced-order model, using analytical gradients obtained via adjoint sensitivity analysis. The sample set is iteratively refined by adding each optimum found in this search. Results demonstrate robust convergence towards the global optimum but highlight the computational cost introduced by the gradient-based optimization. The probabilistic extensions integrate seamlessly into existing matrix-interpolatory reduction frameworks and enable the analytical calculation of gradients under uncertainty.
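The core loop described in the abstract can be illustrated with a minimal, self-contained Python (NumPy) sketch of two of its ingredients: Thompson sampling over entry-wise polynomial coefficients of a reduced operator, and an analytical gradient of a simple stand-in objective obtained via adjoint sensitivity analysis. All names, dimensions, the quadratic monomial basis, and the static objective are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's model): each entry of a 2x2 reduced
# operator K(p) is a quadratic polynomial in a single design variable p,
# with an independent Gaussian posterior over the polynomial coefficients,
# as a sparse Bayesian regression would provide.
n, deg = 2, 2
coef_mean = 0.1 * rng.normal(size=(n, n, deg + 1))
coef_mean[:, :, 0] += 2.0 * np.eye(n)          # keep K(p) well conditioned
coef_std = 0.05 * np.ones_like(coef_mean)

def thompson_draw():
    """Sample one plausible coefficient set from the posterior (Thompson sampling)."""
    return coef_mean + coef_std * rng.standard_normal(coef_mean.shape)

def assemble(coef, p):
    """Evaluate the parametrized reduced operator K(p) entry by entry."""
    return coef @ p ** np.arange(deg + 1)      # monomial basis [1, p, p^2]

def d_assemble_dp(coef, p):
    """Analytical derivative dK/dp of the polynomial parametrization."""
    k = np.arange(deg + 1)
    return coef @ (k * p ** np.clip(k - 1, 0, None))

# Stand-in static-response objective J(p) = c^T x with K(p) x = b.
b, c = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def objective_and_grad(coef, p):
    """Objective and its gradient via the adjoint method: one extra linear solve."""
    K = assemble(coef, p)
    x = np.linalg.solve(K, b)                  # state equation
    lam = np.linalg.solve(K.T, c)              # adjoint equation K^T lam = c
    return c @ x, -lam @ (d_assemble_dp(coef, p) @ x)

# One acquisition step of the adaptive loop: draw a model, descend on it.
coef = thompson_draw()
p = 0.3
for _ in range(20):
    J, g = objective_and_grad(coef, p)
    p -= 0.1 * g                               # plain gradient step
```

In the full algorithm, each outer iteration would then evaluate the true full-order model at the resulting design point, add it to the sample set, and retrain the entry-wise regression before the next Thompson draw.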

Metadata

arXiv ID: 2602.23314
Provider: ARXIV
Primary Category: cs.CE
Published: 2026-02-26
Comment: 25 pages, 10 figures
Fetched: 2026-02-27 04:35
