AI LLM March 12, 2026

PRMB: Benchmarking Reward Models in Long-Horizon CBT-based Counseling Dialogue

Authors

Yougen Zhou, Qin Chen, Ningning Zhou, Jie Zhou, Liang He

Abstract

Large language models (LLMs) hold potential for mental healthcare applications, particularly in cognitive behavioral therapy (CBT)-based counseling, where reward models play a critical role in aligning LLMs with preferred therapeutic behaviors. However, existing reward model evaluations often fail to capture alignment effectiveness in long-horizon interventions, owing to limited coverage of process-oriented datasets and a mismatch between evaluation targets and psychological alignment objectives. To address these limitations, we present PRMB, a comprehensive benchmark tailored for evaluating reward models in multi-session CBT counseling. PRMB spans 6 sessions and 21 diverse negative scenarios, incorporating both pairwise and Best-of-N preference evaluations. We demonstrate a positive correlation between our benchmark and downstream counseling dialogue performance. Using the benchmark, we conduct an extensive analysis of state-of-the-art reward models, revealing generalization defects that previous benchmarks did not uncover and highlighting the potential of generative reward models. Furthermore, we examine the effectiveness of inference-time strategies for reward model evaluation and analyze the factors that affect generative reward models. This work advances intelligent informatics for personalized healthcare by establishing a framework for reward model assessment in mental health dialogues. Evaluation code and datasets are publicly available at https://github.com/YouKenChaw/PRMB.
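The two evaluation protocols the abstract names, pairwise preference accuracy and Best-of-N selection, can be sketched in a few lines. The reward function below is a toy stand-in (response length), not any of the paper's reward models; any scorer with the same signature could be plugged in.

```python
def toy_reward(context: str, response: str) -> float:
    # Placeholder scorer for illustration only (favors longer replies).
    return float(len(response))

def pairwise_accuracy(pairs, reward_fn):
    """pairs: iterable of (context, chosen, rejected) triples.
    Returns the fraction where the reward model prefers `chosen`."""
    pairs = list(pairs)
    correct = sum(
        1 for ctx, chosen, rejected in pairs
        if reward_fn(ctx, chosen) > reward_fn(ctx, rejected)
    )
    return correct / len(pairs)

def best_of_n(context, candidates, reward_fn):
    """Best-of-N: select the highest-scoring response among N candidates."""
    return max(candidates, key=lambda r: reward_fn(context, r))

pairs = [("ctx", "a longer, more detailed reply", "short"),
         ("ctx", "ok", "a very long but dispreferred reply")]
print(pairwise_accuracy(pairs, toy_reward))                       # 0.5
print(best_of_n("ctx", ["hi", "hello there", "hey"], toy_reward)) # hello there
```

In the benchmark setting, the context would be a multi-session counseling dialogue history and the candidates would be sampled counselor responses; the two metrics then test whether the reward model ranks therapist-preferred behavior highest.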

Metadata

arXiv ID: 2603.11494
Provider: ARXIV
Primary Category: cs.DB
Published: 2026-03-12
Fetched: 2026-03-14 05:03
