Research Paper
March 03, 2026

Same Error, Different Function: The Optimizer as an Implicit Prior in Financial Time Series

Authors

Federico Vittorio Cortesi, Giuseppe Iannone, Giulia Crippa, Tomaso Poggio, Pierfrancesco Beneventano

Abstract

Neural networks applied to financial time series operate in a regime of underspecification, where many model predictors achieve indistinguishable out-of-sample error. Using large-scale volatility forecasting for S&P 500 stocks, we show that different model-training-pipeline pairs with identical test loss learn qualitatively different functions. Across architectures, predictive accuracy remains unchanged, yet optimizer choice reshapes non-linear response profiles and temporal dependence differently. These divergences have material consequences for decisions: volatility-ranked portfolios trace a near-vertical Sharpe-turnover frontier, with nearly 3× turnover dispersion at comparable Sharpe ratios. We conclude that in underspecified settings, optimization acts as a consequential source of inductive bias, so model evaluation should extend beyond scalar loss to encompass functional and decision-level implications.
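The abstract's decision-level metrics can be made concrete with a minimal sketch. The code below is an illustrative toy, not the paper's methodology: it uses synthetic returns, a rolling realized-volatility estimate as a stand-in for a model's volatility forecast, and an equal-weight low-volatility portfolio. The asset count, window length, and decile size are all hypothetical choices for illustration; the point is only how a Sharpe ratio and an average one-way turnover are computed for a volatility-ranked portfolio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the paper's data): 50 assets, 250 daily returns.
n_assets, n_days = 50, 250
returns = rng.normal(0.0002, 0.01, size=(n_days, n_assets))

def vol_ranked_weights(vol_forecast, k=10):
    """Equal-weight the k assets with the lowest forecast volatility."""
    idx = np.argsort(vol_forecast)[:k]
    w = np.zeros_like(vol_forecast)
    w[idx] = 1.0 / k
    return w

# Rolling 20-day realized vol stands in for a trained model's forecast.
window = 20
weights = []
for t in range(window, n_days):
    vol = returns[t - window:t].std(axis=0)
    weights.append(vol_ranked_weights(vol))
weights = np.array(weights)

# Portfolio returns: weights chosen at t applied to next-day returns.
port_ret = (weights[:-1] * returns[window + 1:]).sum(axis=1)

# Annualized Sharpe ratio and average one-way turnover per rebalance.
sharpe = np.sqrt(252) * port_ret.mean() / port_ret.std()
turnover = 0.5 * np.abs(np.diff(weights, axis=0)).sum(axis=1).mean()
print(f"Sharpe: {sharpe:.2f}, avg one-way turnover: {turnover:.2%}")
```

Two forecasts with identical mean-squared error can rank assets differently from day to day, leaving `sharpe` roughly unchanged while `turnover` varies widely; that is the "near-vertical Sharpe-turnover frontier" the abstract describes.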

Metadata

arXiv ID: 2603.02620
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, q-fin.CP
Comment: 39 pages, 24 figures
Published: 2026-03-03
Fetched: 2026-03-04 03:41
