BenchPreS: A Benchmark for Context-Aware Personalized Preference Selectivity of Persistent-Memory LLMs

Authors

Sangyeon Yoon, Sunkyoung Kim, Hyesoo Hong, Wonje Jeung, Yongil Kim, Wooseok Seo, Heuiyeen Yeen, Albert No

Abstract

Large language models (LLMs) increasingly store user preferences in persistent memory to support personalization across interactions. However, in third-party communication settings governed by social and institutional norms, some user preferences may be inappropriate to apply. We introduce BenchPreS, which evaluates whether memory-based user preferences are appropriately applied or suppressed across communication contexts. Using two complementary metrics, Misapplication Rate (MR) and Appropriate Application Rate (AAR), we find even frontier LLMs struggle to apply preferences in a context-sensitive manner. Models with stronger preference adherence exhibit higher rates of over-application, and neither reasoning capability nor prompt-based defenses fully resolve this issue. These results suggest current LLMs treat personalized preferences as globally enforceable rules rather than as context-dependent normative signals.
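The abstract does not define the two metrics formally. A minimal sketch of how such rates could be computed over labeled benchmark cases, assuming each case records whether the stored preference should be applied in its context and whether the model actually applied it (the `Case` structure and field names here are hypothetical, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Case:
    should_apply: bool  # ground truth: is applying the preference appropriate here?
    did_apply: bool     # observed: did the model apply the stored preference?

def misapplication_rate(cases: list[Case]) -> float:
    """Fraction of suppression contexts where the model applied the
    preference anyway (hypothetical reading of MR)."""
    suppress = [c for c in cases if not c.should_apply]
    return sum(c.did_apply for c in suppress) / len(suppress)

def appropriate_application_rate(cases: list[Case]) -> float:
    """Fraction of application contexts where the model did apply the
    preference (hypothetical reading of AAR)."""
    apply = [c for c in cases if c.should_apply]
    return sum(c.did_apply for c in apply) / len(apply)

# A model that applies preferences unconditionally scores AAR = 1.0
# but also MR = 1.0 -- the over-application trade-off the abstract
# attributes to models with strong preference adherence.
cases = [Case(True, True), Case(False, True), Case(False, True), Case(True, True)]
print(misapplication_rate(cases))           # 1.0
print(appropriate_application_rate(cases))  # 1.0
```

Under this reading, a context-sensitive model would keep AAR high while driving MR toward zero; the abstract's finding is that current models trade one against the other.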

Metadata

arXiv ID: 2603.16557
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-03-17
Fetched: 2026-03-18 06:02
