AMA-Bench: Evaluating Long-Horizon Memory for Agentic Applications

Authors

Yujie Zhao, Boqin Yuan, Junbo Huang, Haocheng Yuan, Zhongming Yu, Haozhou Xu, Lanxiang Hu, Abhilash Shankarampeta, Zimeng Huang, Wentao Ni, Yuandong Tian, Jishen Zhao

Abstract

Large Language Models (LLMs) are deployed as autonomous agents in increasingly complex applications, where long-horizon memory is critical for strong performance. However, a significant gap exists between practical applications and current evaluation standards for agent memory: existing benchmarks focus primarily on dialogue-centric, human-agent interactions. In reality, agent memory consists of a continuous stream of agent-environment interactions composed largely of machine-generated representations. To bridge this gap, we introduce AMA-Bench (Agent Memory with Any length), which evaluates long-horizon memory for LLMs in real agentic applications. It features two key components: (1) a set of real-world agentic trajectories across representative agentic applications, paired with expert-curated QA, and (2) a set of synthetic agentic trajectories that scale to arbitrary horizons, paired with rule-based QA. Our comprehensive study shows that existing memory systems underperform on AMA-Bench primarily because their stored memories lack causal and objective information, and because the similarity-based retrieval most of them rely on is inherently lossy. To address these limitations, we propose AMA-Agent, an effective memory system featuring a causality graph and tool-augmented retrieval. Our results demonstrate that AMA-Agent achieves 57.22% average accuracy on AMA-Bench, surpassing the strongest memory-system baselines by 11.16%.
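
The abstract is the only description of AMA-Agent available here, so the Python sketch below is purely illustrative: it assumes one plausible reading of "a causality graph and tool-augmented retrieval", namely interaction events stored as graph nodes with causal edges, queried through exact-match and traversal tools rather than similarity search alone. The class and method names are invented for this example and are not taken from the paper.

# Minimal sketch of a causality-graph memory with tool-style retrieval.
# All names and structures here are illustrative assumptions, not the
# paper's actual implementation.
from dataclasses import dataclass, field


@dataclass
class MemoryEvent:
    """One agent-environment interaction step (e.g., a tool call and its result)."""
    event_id: int
    content: str                                         # machine-generated observation/action text
    caused_by: list[int] = field(default_factory=list)   # ids of causally prior events


class CausalityGraphMemory:
    """Stores events as nodes; causal links form directed edges."""

    def __init__(self) -> None:
        self.events: dict[int, MemoryEvent] = {}

    def add_event(self, event: MemoryEvent) -> None:
        self.events[event.event_id] = event

    # Tool-augmented retrieval: structured lookups instead of
    # (or in addition to) lossy embedding-similarity search.
    def keyword_lookup(self, keyword: str) -> list[MemoryEvent]:
        """Exact substring match over stored events (a 'grep'-style tool)."""
        return [e for e in self.events.values() if keyword in e.content]

    def trace_causes(self, event_id: int) -> list[MemoryEvent]:
        """Walk causal edges backwards to recover why an event happened."""
        seen, stack, chain = set(), [event_id], []
        while stack:
            eid = stack.pop()
            if eid in seen or eid not in self.events:
                continue
            seen.add(eid)
            event = self.events[eid]
            chain.append(event)
            stack.extend(event.caused_by)
        return chain


if __name__ == "__main__":
    mem = CausalityGraphMemory()
    mem.add_event(MemoryEvent(0, "ran `pytest`; 3 tests failed in test_io.py"))
    mem.add_event(MemoryEvent(1, "patched read_config() to handle a missing file", caused_by=[0]))
    mem.add_event(MemoryEvent(2, "ran `pytest`; all tests passed", caused_by=[1]))
    # Answering "why was read_config() patched?" via causal traversal:
    for e in mem.trace_causes(1):
        print(e.event_id, e.content)

The point of the sketch is the contrast drawn in the abstract: a similarity-only retriever could surface the patch event without its cause, whereas a causal traversal returns the failing-test event that explains it.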

Metadata

arXiv ID: 2602.22769
Provider: ARXIV
Primary Category: cs.AI
Published: 2026-02-26
Fetched: 2026-02-27 04:35
