
MSSR: Memory-Aware Adaptive Replay for Continual LLM Fine-Tuning

Authors

Yiyang Lu, Yu He, Jianlong Chen, Hongyuan Zha

Abstract

Continual fine-tuning of large language models (LLMs) is becoming increasingly crucial as these models are deployed in dynamic environments where tasks and data distributions evolve over time. While strong adaptability enables rapid acquisition of new knowledge, it also exposes LLMs to catastrophic forgetting, where previously learned skills degrade during sequential training. Existing replay-based strategies, such as fixed interleaved replay, accuracy-supervised scheduling, and loss-driven scheduling, remain limited: some depend on heuristic rules and provide only partial mitigation of forgetting, while others improve performance but incur substantial computational overhead. Motivated by retention dynamics under sequential fine-tuning, we propose Memory-Inspired Sampler and Scheduler Replay (MSSR), an experience replay framework that estimates sample-level memory strength and schedules rehearsal at adaptive intervals to mitigate catastrophic forgetting while maintaining fast adaptation. Extensive experiments across three backbone models and 11 sequential tasks show that MSSR consistently outperforms state-of-the-art replay baselines, with particularly strong gains on reasoning-intensive and multiple-choice benchmarks.
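The abstract describes the core mechanism only at a high level: track a per-sample memory strength and rehearse each sample at adaptive intervals. It does not give the actual update or scheduling rules, so the following is a minimal illustrative sketch of one plausible reading (a spaced-repetition-style scheduler), not the paper's method; all class names, the exponential decay model, and the `decay`/`threshold`/`boost` parameters are assumptions introduced here.

```python
class MemoryAwareReplayScheduler:
    """Toy sketch: each replay sample carries a 'memory strength' that
    decays with every training step; samples whose strength falls below
    a threshold are due for rehearsal, and each rehearsal boosts the
    strength so the sample is not revisited again immediately."""

    def __init__(self, decay=0.1, threshold=0.5, boost=1.0):
        self.decay = decay          # per-step exponential decay rate
        self.threshold = threshold  # rehearse when strength drops below this
        self.boost = boost          # strength added after each rehearsal
        self.strength = {}          # sample_id -> current memory strength

    def add(self, sample_id):
        # Newly learned samples start at full strength.
        self.strength[sample_id] = 1.0

    def step(self):
        """Apply one training step of decay to all tracked samples and
        return the ids that are now due for rehearsal."""
        due = []
        for sid in self.strength:
            self.strength[sid] *= (1.0 - self.decay)
            if self.strength[sid] < self.threshold:
                due.append(sid)
        return due

    def rehearse(self, sample_id):
        # Rehearsal restores strength, lengthening the effective
        # interval before this sample is scheduled again.
        self.strength[sample_id] += self.boost
```

Under this toy model, weakly remembered samples surface for replay more often while well-retained ones are skipped, which is the adaptive-interval behavior the abstract attributes to MSSR.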

Metadata

arXiv ID: 2603.09892
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-10
Fetched: 2026-03-11 06:02
