
Effective Distillation to Hybrid xLSTM Architectures

Authors

Lukas Hauzenberger, Niklas Schmidinger, Thomas Schmied, Anamaria-Roberta Hartl, David Stap, Pieter-Jan Hoedt, Maximilian Beck, Sebastian Böck, Günter Klambauer, Sepp Hochreiter

Abstract

There have been numerous attempts to distill quadratic attention-based large language models (LLMs) into sub-quadratic linearized architectures. However, despite extensive research, such distilled models often fail to match the performance of their teacher LLMs on various downstream tasks. We set out the goal of lossless distillation, which we define in terms of tolerance-corrected Win-and-Tie rates between student and teacher on sets of tasks. To this end, we introduce an effective distillation pipeline for xLSTM-based students. We propose an additional merging stage, where individually linearized experts are combined into a single model. We show the effectiveness of this pipeline by distilling base and instruction-tuned models from the Llama, Qwen, and Olmo families. In many settings, our xLSTM-based students recover most of the teacher's performance, and even exceed it on some downstream tasks. Our contributions are an important step towards more energy-efficient and cost-effective replacements for transformer-based LLMs.
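
The abstract defines lossless distillation via tolerance-corrected Win-and-Tie rates but does not spell out the formula. Below is a minimal sketch of one plausible reading, assuming a student "wins or ties" on a task whenever its score falls no more than a tolerance below the teacher's, and the rate is the fraction of tasks where this holds; the function name, score format, and default tolerance are illustrative assumptions, not the paper's definition.

```python
from typing import Mapping

def win_and_tie_rate(
    student_scores: Mapping[str, float],
    teacher_scores: Mapping[str, float],
    tolerance: float = 0.01,
) -> float:
    """Fraction of shared tasks on which the student matches or beats the
    teacher, counting any score within `tolerance` below the teacher's as a tie.

    Hypothetical reading of the paper's "tolerance-corrected Win-and-Tie
    rate"; the authors' exact definition may differ.
    """
    tasks = student_scores.keys() & teacher_scores.keys()
    if not tasks:
        raise ValueError("no overlapping tasks to compare")
    wins_or_ties = sum(
        student_scores[t] >= teacher_scores[t] - tolerance for t in tasks
    )
    return wins_or_ties / len(tasks)

# Under this reading, a distillation would be lossless when the rate is 1.0,
# i.e. the student wins or ties on every task in the set.
student = {"hellaswag": 0.79, "arc_easy": 0.81, "winogrande": 0.72}
teacher = {"hellaswag": 0.80, "arc_easy": 0.80, "winogrande": 0.74}
print(win_and_tie_rate(student, teacher, tolerance=0.01))  # -> 0.666...
```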

Metadata

arXiv ID: 2603.15590
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-16
Fetched: 2026-03-17 06:02
