AI · LLM · March 05, 2026

MPCEval: A Benchmark for Multi-Party Conversation Generation

Authors

Minxing Zhang, Yi Yang, Zhuofan Jia, Xuan Yang, Jian Pei, Yuchen Zang, Xingwang Deng, Xianglong Chen

Abstract

Multi-party conversation generation, such as smart reply and collaborative assistants, is an increasingly important capability of generative AI, yet its evaluation remains a critical bottleneck. Compared to two-party dialogue, multi-party settings introduce distinct challenges, including complex turn-taking, role-dependent speaker behavior, long-range conversational structure, and multiple equally valid continuations. Accordingly, we introduce MPCEval, a task-aware evaluation and benchmarking suite for multi-party conversation generation. MPCEval decomposes generation quality into speaker modeling, content quality, and speaker–content consistency, and explicitly distinguishes local next-turn prediction from global full-conversation generation. It provides novel, quantitative, reference-free, and reproducible metrics that scale across datasets and models. We apply MPCEval to diverse public and real-world datasets and evaluate modern generation methods alongside human-authored conversations. The results reveal systematic, dimension-specific model characteristics in participation balance, content progression and novelty, and speaker–content consistency, demonstrating that evaluation objectives critically shape model assessment and that single-score evaluation obscures fundamental differences in multi-party conversational behavior. The implementation of MPCEval and the associated evaluation code are publicly available at https://github.com/Owen-Yang-18/MPCEval.

Metadata

arXiv ID: 2603.04969
Provider: ARXIV
Primary Category: cs.CL
Published: 2026-03-05
Fetched: 2026-03-06 14:20
