March 24, 2026

Portfolio Optimization under Recursive Utility via Reinforcement Learning

Authors

Minkey Chang

Abstract

We study whether a risk-sensitive objective from asset-pricing theory -- recursive utility -- improves reinforcement learning for portfolio allocation. The Bellman equation under recursive utility involves a certainty equivalent (CE) of future value that has no closed form under observed returns; we approximate it by $K$-sample Monte Carlo and train actor-critic agents (PPO, A2C) on the resulting value target and on an approximate advantage estimate (AAE) that generalizes the Bellman residual to a multi-step form with state-dependent weights. This formulation applies only to critic-based algorithms. On 10 chronological train/test splits of South Korean ETF data, the recursive-utility agent improves on the discounted (naive) baseline in Sharpe ratio, maximum drawdown, and cumulative return. Derivations, the world model and metrics, and full result tables appear in the appendices.
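To make the $K$-sample Monte Carlo step concrete, here is a minimal sketch of a CE-based value target. It is not the paper's implementation: it assumes a CRRA-style certainty equivalent, $\mathrm{CE}(V) = (\mathbb{E}[V^{1-\gamma}])^{1/(1-\gamma)}$, a common choice in recursive (Epstein-Zin) utility, and a simple additive aggregator $\text{target} = r + \beta\,\mathrm{CE}(V(s'))$; the function names, aggregator, and parameter values are all illustrative assumptions.

```python
import numpy as np

def certainty_equivalent(v_samples: np.ndarray, gamma: float) -> float:
    """K-sample Monte Carlo certainty equivalent.

    Assumes a CRRA-style CE: (E[V^(1-gamma)])^(1/(1-gamma)).
    Requires gamma != 1 and strictly positive samples; the paper's
    exact CE may differ.
    """
    k = 1.0 - gamma
    return float(np.mean(v_samples ** k) ** (1.0 / k))

def recursive_value_target(reward: float, v_next_samples, beta: float, gamma: float) -> float:
    """Bellman-style target with the CE in place of the expectation.

    Hypothetical additive aggregator: reward + beta * CE(V(s')).
    v_next_samples holds K critic evaluations of sampled next states.
    """
    return reward + beta * certainty_equivalent(np.asarray(v_next_samples, dtype=float), gamma)

# Example: K = 8 sampled next-state critic values (illustrative numbers).
rng = np.random.default_rng(0)
v_next = rng.lognormal(mean=0.0, sigma=0.2, size=8)
print(recursive_value_target(reward=0.01, v_next_samples=v_next, beta=0.99, gamma=5.0))
```

With $\gamma > 1$ the CE penalizes dispersion in the sampled next-state values, so the same risk aversion that shapes recursive utility flows directly into the critic's regression target.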

Metadata

arXiv ID: 2603.22880
Provider: ARXIV
Primary Category: q-fin.GN
Secondary Categories: cs.CE, q-fin.PM
Published: 2026-03-24
Fetched: 2026-03-25 06:02
Links: https://arxiv.org/abs/2603.22880v1 (abstract), https://arxiv.org/pdf/2603.22880v1 (PDF)
