
ChatShopBuddy: Towards Reliable Conversational Shopping Agents via Reinforcement Learning

Authors

Yiruo Cheng, Kelong Mao, Tianhao Li, Jiejun Tan, Ji-Rong Wen, Zhicheng Dou

Abstract

Conversational shopping agents represent a critical consumer-facing application of Large Language Model (LLM)-powered agents, yet how to effectively apply post-training Reinforcement Learning (RL) to optimize such agents remains underexplored. This work investigates RL-based optimization for shopping agents in real-world scenarios, where agents must simultaneously satisfy multiple interdependent objectives spanning objective metrics (product correctness), subjective qualities (persuasiveness), outcome rewards (final response quality), and process rewards (tool efficiency). We present a complete methodology to address this challenge. Specifically, we first construct SmartShopBench, a benchmark that captures diverse shopping intents with a hierarchical evaluation that decomposes complex quality requirements into measurable levels. Building on this evaluation framework, we design Hierarchical Reward Modeling (HRM) to structure mixed reward types through conditional gating that reflects their logical dependencies. To enable efficient training, we further propose Dynamic Contrastive Policy Optimization (DCPO), which balances response quality with operational efficiency through dynamic trajectory selection based on reward and reasoning length. Extensive experiments demonstrate that our RL-trained agent, namely ChatShopBuddy, consistently outperforms larger models relying on generic reasoning, achieving superior stability rather than merely higher peaks. Our work provides valuable guidance for applying RL to real-world conversational agents.
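The abstract names two mechanisms: conditional gating of mixed reward types (HRM) and dynamic trajectory selection by reward and reasoning length (DCPO). A minimal sketch of both ideas follows; the function names, gating order, weights, and selection rule are illustrative assumptions, not the paper's actual formulation:

```python
def hierarchical_reward(correct: bool, persuasive: float,
                        quality: float, tool_eff: float) -> float:
    """Conditionally gated reward: subjective and process signals only
    count once the objective prerequisite (product correctness) holds.
    Weights here are placeholders, not the paper's values."""
    if not correct:
        return 0.0
    return 0.5 * quality + 0.3 * persuasive + 0.2 * tool_eff


def select_trajectories(trajectories: list[dict], k: int = 2):
    """Illustrative contrastive selection: rank rollouts by reward
    (descending), breaking ties toward shorter reasoning, then pick
    the top-k and bottom-k as positive/negative training pairs."""
    ranked = sorted(trajectories,
                    key=lambda t: (-t["reward"], t["length"]))
    return ranked[:k], ranked[-k:]
```

For instance, a rollout that recommends the wrong product scores zero regardless of how persuasive its response is, while among two equally rewarded rollouts the one with the shorter reasoning trace is preferred as the positive example.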

Metadata

arXiv ID: 2603.06065
Provider: ARXIV
Primary Category: cs.IR
Published: 2026-03-06
Fetched: 2026-03-09 06:05
