March 18, 2026

Stochastic set-valued optimization and its application to robust learning

Authors

Tommaso Giovannelli, Jingfu Tan, Luis Nunes Vicente

Abstract

In this paper, we develop a stochastic set-valued optimization (SVO) framework tailored for robust machine learning. In the SVO setting, each decision variable is mapped to a set of objective values, and optimality is defined via set relations. We focus on SVO problems with hyperbox sets, which can be reformulated as multi-objective optimization (MOO) problems with finitely many objectives and serve as a foundation for representing or approximating more general mapped sets. Two special cases of hyperbox-valued optimization (HVO) are interval-valued (IVO) and rectangle-valued (RVO) optimization. We construct stochastic IVO/RVO formulations that incorporate subquantiles and superquantiles into the objective functions of the MOO reformulations, providing a new characterization for subquantiles. These formulations provide interpretable trade-offs by capturing both lower- and upper-tail behaviors of loss distributions, thereby going beyond standard empirical risk minimization and classical robust models. To solve the resulting multi-objective problems, we adopt stochastic multi-gradient algorithms and select a Pareto knee solution. In numerical experiments, the proposed algorithms with this selection strategy exhibit improved robustness and reduced variability across test replications under distributional shift compared with empirical risk minimization, while maintaining competitive accuracy.
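The abstract's key ingredients — lower-/upper-tail averages of a loss distribution (subquantiles/superquantiles) and selection of a Pareto knee solution — can be illustrated with a short sketch. This is not the paper's exact formulation: the empirical tail-average estimators below and the chord-distance definition of the knee point (the front point farthest from the segment joining the two extreme solutions) are common choices assumed here for illustration, and all function names are hypothetical.

```python
import numpy as np

def subquantile(losses, alpha):
    """Average of the losses at or below the alpha-quantile (lower-tail behavior)."""
    q = np.quantile(losses, alpha)
    return losses[losses <= q].mean()

def superquantile(losses, alpha):
    """Average of the losses at or above the alpha-quantile (upper tail, a.k.a. CVaR)."""
    q = np.quantile(losses, alpha)
    return losses[losses >= q].mean()

def knee_point(front):
    """Pick the point on a 2-D Pareto front farthest from the chord joining its extremes.

    `front` is an (n, 2) array of nondominated objective vectors.
    """
    front = front[np.argsort(front[:, 0])]
    a, b = front[0], front[-1]
    d = (b - a) / np.linalg.norm(b - a)        # unit vector along the chord a -> b
    proj = (front - a) @ d                     # scalar projections onto the chord
    dist = np.linalg.norm((front - a) - np.outer(proj, d), axis=1)
    return front[np.argmax(dist)]              # maximum perpendicular distance

losses = np.arange(1.0, 11.0)                  # toy loss sample: 1, 2, ..., 10
print(subquantile(losses, 0.1))                # lower-tail average
print(superquantile(losses, 0.9))              # upper-tail average
print(knee_point(np.array([[0.0, 1.0], [0.1, 0.1], [1.0, 0.0]])))
```

On this toy front, the middle point `[0.1, 0.1]` is the knee: both extremes lie on the chord itself, so it is the unique point with positive perpendicular distance, matching the intuition of the "best trade-off" solution.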

Metadata

arXiv ID: 2603.17691
Provider: ARXIV
Primary Category: math.OC
Published: 2026-03-18
Fetched: 2026-03-19 06:01
