
Selecting Offline Reinforcement Learning Algorithms for Stochastic Network Control

Authors

Nicolas Helson, Pegah Alizadeh, Anastasios Giovanidis

Abstract

Offline Reinforcement Learning (RL) is a promising approach for next-generation wireless networks, where online exploration is unsafe and large amounts of operational data can be reused across the model lifecycle. However, the behavior of offline RL algorithms under genuinely stochastic dynamics -- inherent to wireless systems due to fading, noise, and traffic mobility -- remains insufficiently understood. We address this gap by evaluating Bellman-based (Conservative Q-Learning), sequence-based (Decision Transformers), and hybrid (Critic-Guided Decision Transformers) offline RL methods in an open-access stochastic telecom environment (mobile-env). Our results show that Conservative Q-Learning consistently produces more robust policies across different sources of stochasticity, making it a reliable default choice in lifecycle-driven AI management frameworks. Sequence-based methods remain competitive and can outperform Bellman-based approaches when sufficient high-return trajectories are available. These findings provide practical guidance for offline RL algorithm selection in AI-driven network control pipelines, such as O-RAN and future 6G functions, where robustness and data availability are key operational constraints.
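
Example (illustrative)

The paper does not ship code here, but the evaluation setup it describes (offline RL on the open-access mobile-env simulator) can be sketched in a few lines. The environment id, dataset size, and random behavior policy below are assumptions for illustration, not the authors' actual configuration; mobile-env is assumed to be installed and to register its Gymnasium environment ids on import.

import gymnasium
import mobile_env  # noqa: F401 -- importing registers the mobile-env environment ids

# Assumed variant; mobile-env also ships medium/large and multi-agent scenarios.
env = gymnasium.make("mobile-small-central-v0")

# Log transitions from a stand-in behavior policy into flat arrays.
observations, actions, rewards, terminals = [], [], [], []
obs, _ = env.reset(seed=0)
for _ in range(10_000):  # assumed dataset size
    action = env.action_space.sample()  # random behavior policy (assumption)
    next_obs, reward, terminated, truncated, _ = env.step(action)
    observations.append(obs)
    actions.append(action)
    rewards.append(reward)
    terminals.append(terminated or truncated)
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()

The logged tuples would then be packaged into an offline dataset (e.g., an MDPDataset in a library such as d3rlpy) to train and compare the CQL, DT, and CGDT agents studied in the paper.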

Metadata

arXiv ID: 2603.03932
Provider: ARXIV
Primary Category: cs.NI
Categories: cs.NI, cs.AI, cs.LG, cs.PF, eess.SY
Links: https://arxiv.org/abs/2603.03932v1 (abstract), https://arxiv.org/pdf/2603.03932v1 (PDF)
Comment: Long version 12 pages, double column including Appendix. Short version accepted at NOMS2026-IPSN, Rome, Italy.
Published: 2026-03-04
Fetched: 2026-03-05 06:06
