
Balancing Multiple Objectives in Urban Traffic Control with Reinforcement Learning from AI Feedback

Authors

Chenyang Zhao, Vinny Cahill, Ivana Dusparic

Abstract

Reward design has been one of the central challenges for real-world reinforcement learning (RL) deployment, especially in settings with multiple objectives. Preference-based RL offers an appealing alternative by learning from human preferences over pairs of behavioural outcomes. More recently, RL from AI feedback (RLAIF) has demonstrated that large language models (LLMs) can generate preference labels at scale, reducing reliance on human annotators. However, existing RLAIF work typically focuses only on single-objective tasks, leaving open the question of how RLAIF handles systems that involve multiple objectives. In such systems, trade-offs among conflicting objectives are difficult to specify, and policies risk collapsing into optimizing for a single dominant goal. In this paper, we explore extending the RLAIF paradigm to multi-objective self-adaptive systems. We show that multi-objective RLAIF can produce policies with balanced trade-offs that reflect different user priorities, without laborious reward engineering. We argue that integrating RLAIF into multi-objective RL offers a scalable path toward user-aligned policy learning in domains with inherently conflicting objectives.
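The core mechanism the abstract describes, learning a reward model from AI-generated pairwise preferences over multi-objective outcomes, can be sketched compactly. The following Python sketch is hypothetical and not the authors' code: the two objectives (mean vehicle wait and mean pedestrian wait), the llm_prefer stub standing in for an LLM preference query, and the linear Bradley-Terry reward model are all illustrative assumptions, chosen to show how pairwise AI labels can shape a single scalar reward over conflicting objectives.

# Minimal sketch (not the authors' method): preference-based reward
# learning with AI-generated labels in a two-objective traffic setting.
# Hypothetical assumptions: segments are summarised by two statistics
# (mean vehicle wait, mean pedestrian wait), llm_prefer stands in for
# an LLM preference query, and the reward model is linear Bradley-Terry.

import numpy as np

rng = np.random.default_rng(0)

def llm_prefer(stats_a, stats_b, weights=(0.5, 0.5)):
    # Stand-in for an LLM preference label: returns 1 if segment A is
    # preferred, else 0. A real RLAIF pipeline would instead prompt an
    # LLM with textual summaries of both segments and the user's
    # stated priorities over the objectives.
    cost = lambda s: weights[0] * s[0] + weights[1] * s[1]
    return 1 if cost(stats_a) < cost(stats_b) else 0

# Synthetic segment statistics: (mean vehicle wait, mean pedestrian wait).
segments = rng.uniform(0.0, 60.0, size=(200, 2))

# Collect pairwise preferences from the AI labeler.
pairs, labels = [], []
for _ in range(500):
    i, j = rng.choice(len(segments), size=2, replace=False)
    pairs.append((i, j))
    labels.append(llm_prefer(segments[i], segments[j]))

# Linear reward model r(s) = w . s, fit by maximising the Bradley-Terry
# likelihood: P(A preferred over B) = sigmoid(r(A) - r(B)).
w = np.zeros(2)
lr = 0.05
for _ in range(300):
    grad = np.zeros(2)
    for (i, j), y in zip(pairs, labels):
        diff = segments[i] - segments[j]
        p = 1.0 / (1.0 + np.exp(-w @ diff))
        grad += (y - p) * diff  # gradient of the log-likelihood
    w += lr * grad / len(pairs)

print("learned reward weights:", w)  # both negative: longer waits, lower reward

Under these assumptions the learned weights recover the labeler's implicit trade-off between the two objectives; varying the weights passed to llm_prefer (i.e., the simulated user priorities) shifts the learned reward accordingly, which is the balancing behaviour the paper studies at scale.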

Metadata

arXiv ID: 2602.20728
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-24
Fetched: 2026-02-25 06:05
