March 02, 2026

Modular Memory is the Key to Continual Learning Agents

Authors

Vaggelis Dorovatas, Malte Schwerin, Andrew D. Bagdanov, Lucas Caccia, Antonio Carta, Laurent Charlin, Barbara Hammer, Tyler L. Hayes, Timm Hess, Christopher Kanan, Dhireesha Kudithipudi, Xialei Liu, Vincenzo Lomonaco, Jorge Mendez-Mendez, Darshan Patil, Ameya Prabhu, Elisa Ricci, Tinne Tuytelaars, Gido M. van de Ven, Liyuan Wang, Joost van de Weijer, Jonghyun Choi, Martin Mundt, Rahaf Aljundi

Abstract

Foundation models have transformed machine learning through large-scale pretraining and increased test-time compute. Despite surpassing human performance in several domains, these models remain fundamentally limited in continuous operation, experience accumulation, and personalization: capabilities that are central to adaptive intelligence. While continual learning research has long targeted these goals, its historical focus on in-weight learning (IWL), i.e., updating a single model's parameters to absorb new knowledge, has left catastrophic forgetting a persistent challenge. Our position is that combining the strengths of IWL with the recently emerged capabilities of in-context learning (ICL), through the design of modular memory, is the missing piece for continual adaptation at scale. We outline a conceptual framework for modular memory-centric architectures that leverage ICL for rapid adaptation and knowledge accumulation, and IWL for stable updates to model capabilities, charting a practical roadmap toward continually learning agents.
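
The division of labor the abstract describes, fast ICL over an external memory plus slow IWL consolidation, can be made concrete with a small sketch. The following Python is purely illustrative and not from the paper: the class names (MemoryModule, ContinualAgent), the keyword-overlap retriever, and the version-counter stand-in for fine-tuning are all assumptions introduced here for exposition.

# Illustrative sketch only; the paper proposes a conceptual framework,
# not this code. All names below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    key: str      # retrieval cue (an embedding in a real system)
    content: str  # stored experience or fact

@dataclass
class MemoryModule:
    """A modular, external memory: written instantly, read via retrieval."""
    entries: list = field(default_factory=list)

    def write(self, key: str, content: str) -> None:
        # ICL-style knowledge accumulation: no gradient step, and earlier
        # entries are untouched, so nothing is catastrophically overwritten.
        self.entries.append(MemoryEntry(key, content))

    def read(self, query: str, k: int = 3) -> list:
        # Toy retrieval by keyword overlap; a real system would rank
        # entries by embedding similarity instead.
        def overlap(entry: MemoryEntry) -> int:
            return len(set(query.lower().split()) & set(entry.key.lower().split()))
        ranked = sorted(self.entries, key=overlap, reverse=True)
        return [e.content for e in ranked[:k]]

class ContinualAgent:
    def __init__(self) -> None:
        self.memory = MemoryModule()
        self.weights_version = 0  # stands in for the frozen base model

    def answer(self, query: str) -> str:
        # Fast path (ICL): condition the frozen model on retrieved memories.
        context = self.memory.read(query)
        return f"[model v{self.weights_version}] query={query!r} context={context}"

    def consolidate(self) -> None:
        # Slow path (IWL): periodically distill accumulated memories into
        # the model's parameters. A placeholder here; in practice this
        # would be an offline fine-tuning run.
        self.weights_version += 1

agent = ContinualAgent()
agent.memory.write("user preference coffee", "The user takes coffee black.")
print(agent.answer("what coffee does the user like"))
agent.consolidate()  # e.g., run once enough experience has accumulated

In this toy, write and read capture the ICL side (knowledge is accumulated and injected into context without touching parameters), while consolidate marks where an IWL step would periodically fold stable knowledge into the model itself.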

Metadata

arXiv ID: 2603.01761
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, cs.AI
Published: 2026-03-02
Fetched: 2026-03-03 04:34
Links: https://arxiv.org/abs/2603.01761v1 (abstract), https://arxiv.org/pdf/2603.01761v1 (PDF)
Comment: This work stems from discussions held at the Dagstuhl seminar on Continual Learning in the Era of Foundation Models (October 2025)
