Adaptive RAN Slicing Control via Reward-Free Self-Finetuning Agents

Authors

Yuanhao Li, Haozhe Wang, Geyong Min, Nektarios Georgalas, Wang Miao

Abstract

The integration of Generative AI models into AI-native network systems offers a transformative path toward achieving autonomous and adaptive control. However, the application of such models to continuous control tasks is impeded by intrinsic architectural limitations, including finite context windows, the lack of explicit reward signals, and long-context degradation. This paper posits that the key to unlocking robust continuous control is enabling agents to internalize experience by distilling it into their parameters, rather than relying on prompt-based memory. To this end, we propose a novel self-finetuning framework that enables agentic systems to learn continuously through direct interaction with the environment, bypassing the need for handcrafted rewards. Our framework implements a bi-perspective reflection mechanism that generates autonomous linguistic feedback to construct preference datasets from interaction history. A subsequent preference-based fine-tuning process distills long-horizon experiences into the model's parameters. We evaluate our approach on a dynamic Radio Access Network (RAN) slicing task, a challenging multi-objective control problem that requires resolving acute trade-offs between spectrum efficiency, service quality, and reconfiguration stability under volatile network conditions. Experimental results show that our framework outperforms standard Reinforcement Learning (RL) baselines and existing Large Language Model (LLM)-based agents in sample efficiency, stability, and multi-metric optimization. These findings demonstrate the potential of self-improving generative agents for continuous control tasks, paving the way for future AI-native network infrastructure.
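
The abstract outlines a two-stage loop: a bi-perspective reflection step turns raw interaction history into preference pairs without any handcrafted reward, and a preference-based fine-tuning step distills those pairs into the model's weights. The sketch below is one plausible reading of that pipeline, not the authors' implementation; the function names (build_preference_pairs, dpo_loss) and the choice of Direct Preference Optimization as the fine-tuning objective are illustrative assumptions, since the abstract does not specify the exact algorithm.

# Hypothetical sketch of the reward-free self-finetuning loop described in the
# abstract. All names are illustrative, not the authors' API; DPO (Rafailov et
# al., 2023) stands in for the unspecified preference-based objective.
import torch
import torch.nn.functional as F

def build_preference_pairs(history, reflect_a, reflect_b):
    """Turn interaction history into (state, chosen, rejected) tuples.

    reflect_a / reflect_b are the two reflection perspectives; each maps
    (state, action) to a scalar score (assumed to be parsed out of the LLM's
    linguistic critique). No environment reward is consulted anywhere.
    """
    pairs = []
    for state, action_x, action_y in history:
        score_x = reflect_a(state, action_x) + reflect_b(state, action_x)
        score_y = reflect_a(state, action_y) + reflect_b(state, action_y)
        if score_x == score_y:
            continue  # no clear preference between the two views: drop it
        chosen, rejected = (
            (action_x, action_y) if score_x > score_y else (action_y, action_x)
        )
        pairs.append((state, chosen, rejected))
    return pairs

def dpo_loss(policy_logp_c, policy_logp_r, ref_logp_c, ref_logp_r, beta=0.1):
    """DPO objective: widen the policy's log-prob margin over a frozen
    reference model toward the chosen action and away from the rejected one."""
    margin = (policy_logp_c - ref_logp_c) - (policy_logp_r - ref_logp_r)
    return -F.logsigmoid(beta * margin).mean()

In the RAN slicing setting of the paper, the two reflection perspectives would plausibly judge complementary criteria, e.g., short-horizon service quality versus long-horizon reconfiguration stability; any such pairing fits the interface above.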

Metadata

arXiv ID: 2603.10564
Categories: cs.AI (primary), cs.NI
Published: 2026-03-11
Link: https://arxiv.org/abs/2603.10564v1
