POET-X: Memory-efficient LLM Training by Scaling Orthogonal Transformation

Authors

Zeju Qiu, Lixin Liu, Adrian Weller, Han Shi, Weiyang Liu

Abstract

Efficient and stable training of large language models (LLMs) remains a core challenge in modern machine learning systems. Reparameterized Orthogonal Equivalence Training (POET), a spectrum-preserving framework that optimizes each weight matrix through an orthogonal equivalence transformation, was proposed to address this challenge. Although POET provides strong training stability, its original implementation incurs high memory consumption and computational overhead due to intensive matrix multiplications. To overcome these limitations, we introduce POET-X, a scalable and memory-efficient variant that performs orthogonal equivalence transformations at significantly reduced computational cost. POET-X retains the generalization and stability benefits of POET while achieving substantial improvements in throughput and memory efficiency. In our experiments, POET-X enables the pretraining of billion-parameter LLMs on a single NVIDIA H100 GPU, whereas standard optimizers such as AdamW run out of memory under the same settings.
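
To make the abstract's mechanism concrete, below is a minimal PyTorch sketch of a spectrum-preserving orthogonal equivalence reparameterization in the spirit of POET: a frozen weight W0 is trained only through two orthogonal factors R and P, so its singular-value spectrum never changes. The names here (cayley, OrthogonalEquivalence) are illustrative assumptions, not the paper's API, and none of POET-X's memory or throughput optimizations are modeled.

import torch

def cayley(A: torch.Tensor) -> torch.Tensor:
    # Map a skew-symmetric matrix A to an orthogonal matrix via the
    # Cayley transform Q = (I - A)^{-1} (I + A).
    I = torch.eye(A.shape[0], dtype=A.dtype, device=A.device)
    return torch.linalg.solve(I - A, I + A)

class OrthogonalEquivalence(torch.nn.Module):
    # Illustrative sketch (not the paper's implementation): reparameterize a
    # frozen weight W0 as R @ W0 @ P^T. Only the skew-symmetric generators of
    # R and P are trained, so the spectrum of W0 is preserved throughout.
    def __init__(self, w0: torch.Tensor):
        super().__init__()
        m, n = w0.shape
        self.register_buffer("w0", w0)                  # frozen initial weight
        self.a = torch.nn.Parameter(torch.zeros(m, m))  # generator for R
        self.b = torch.nn.Parameter(torch.zeros(n, n))  # generator for P

    def weight(self) -> torch.Tensor:
        r = cayley(self.a - self.a.T)  # skew-symmetrize, then orthogonalize
        p = cayley(self.b - self.b.T)
        return r @ self.w0 @ p.T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().T

# Sanity check: singular values of the transformed weight match those of W0.
if __name__ == "__main__":
    torch.manual_seed(0)
    layer = OrthogonalEquivalence(torch.randn(8, 16))
    layer.a.data.normal_(0, 0.1)
    layer.b.data.normal_(0, 0.1)
    s0 = torch.linalg.svdvals(layer.w0)
    s1 = torch.linalg.svdvals(layer.weight())
    print(torch.allclose(s0, s1, atol=1e-4))  # True: spectrum preserved

Because R and P are generated from skew-symmetric matrices via the Cayley transform, orthogonality holds by construction and the singular-value check passes. Per the abstract, POET-X's contribution is performing this kind of transformation at far lower memory and compute cost than the original POET, which this sketch does not attempt to reproduce.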

Metadata

arXiv ID: 2603.05500v1
Provider: ARXIV
Primary Category: cs.LG
Categories: cs.LG, cs.AI, cs.CL
Comment: Technical report v1 (14 pages, 7 figures, project page: https://spherelab.ai/poetx/)
Links: https://arxiv.org/abs/2603.05500v1 (abstract), https://arxiv.org/pdf/2603.05500v1 (PDF)
Published: 2026-03-05
Fetched: 2026-03-06 14:20
