
February 19, 2026

Synergizing Transport-Based Generative Models and Latent Geometry for Stochastic Closure Modeling

Authors

Xinghao Dong, Huchen Yang, Jin-long Wu

Abstract

Diffusion models, recently developed for generative AI tasks, can produce high-quality samples while maintaining diversity among samples to promote mode coverage, offering a promising path for learning stochastic closure models. Compared to other generative AI models, such as GANs and VAEs, slow sampling is a well-known disadvantage of diffusion models. By systematically comparing transport-based generative models on a numerical example of 2D Kolmogorov flows, we show that flow matching in a lower-dimensional latent space is well suited to fast sampling of stochastic closure models, enabling single-step sampling that is up to two orders of magnitude faster than iterative diffusion-based approaches. To control latent-space distortion and thus ensure the physical fidelity of the sampled closure term, we compare the implicit regularization offered by a joint training scheme against two explicit regularizers: metric-preserving (MP) and geometry-aware (GA) constraints. Besides enabling faster sampling, both explicitly and implicitly regularized latent spaces inherit the key topological information of the lower-dimensional manifold underlying the original complex dynamical system, which enables the learning of stochastic closure models without demanding large amounts of training data.
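
To make the sampling pipeline concrete, below is a minimal sketch of latent flow matching with single-step sampling, in the spirit of the approach the abstract describes. Everything here is an illustrative assumption rather than the authors' implementation: the `VelocityNet` architecture, the linear (rectified-flow style) probability path z_t = (1 - t) z0 + t z1, and the pretrained `decoder` mapping latent samples back to a closure term.

```python
# Hypothetical sketch of latent flow matching for a stochastic closure term.
# Assumes a pretrained (and geometry-regularized) autoencoder; all names,
# shapes, and architectural choices are illustrative, not the paper's code.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Predicts the velocity field v(z_t, t) in the latent space."""
    def __init__(self, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Concatenate the time variable as an extra input channel.
        return self.net(torch.cat([z_t, t[:, None]], dim=-1))

def flow_matching_loss(v_net: VelocityNet, z1: torch.Tensor) -> torch.Tensor:
    """Conditional flow matching on the linear path
    z_t = (1 - t) * z0 + t * z1, whose target velocity is z1 - z0.
    Here z1 are encoded training latents and z0 is Gaussian noise."""
    z0 = torch.randn_like(z1)
    t = torch.rand(z1.shape[0], device=z1.device)
    z_t = (1 - t[:, None]) * z0 + t[:, None] * z1
    target = z1 - z0
    return ((v_net(z_t, t) - target) ** 2).mean()

@torch.no_grad()
def sample_one_step(v_net, decoder, n, latent_dim, device="cpu"):
    """One Euler step of dz/dt = v(z, t) from t=0 to t=1, then decode."""
    z0 = torch.randn(n, latent_dim, device=device)
    t0 = torch.zeros(n, device=device)
    z1 = z0 + v_net(z0, t0)
    return decoder(z1)
```

A single Euler step is exact only when the learned flow is nearly straight; in practice a few steps may be needed, but this is the mechanism behind the order-of-magnitude speedup over iterative diffusion sampling that the abstract reports.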

Metadata

arXiv ID: 2602.17089
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-02-19
Fetched: 2026-02-21 18:51
