Distributed LLM Pretraining During Renewable Curtailment Windows: A Feasibility Study

Authors

Philipp Wiesner, Soeren Becker, Brett Cornick, Dominik Scheinert, Alexander Acker, Odej Kao

Abstract

Training large language models (LLMs) requires substantial compute and energy. At the same time, renewable energy sources regularly produce more electricity than the grid can absorb, leading to curtailment: the deliberate reduction of clean generation that would otherwise go to waste. These periods represent an opportunity: if training is aligned with curtailment windows, LLMs can be pretrained using electricity that is both clean and cheap. This technical report presents a system that performs full-parameter LLM training across geo-distributed GPU clusters during regional curtailment windows, elastically switching between local single-site training and federated multi-site synchronization as sites become available or unavailable. Our prototype trains a 561M-parameter transformer model across three clusters using the Flower federated learning framework, with curtailment periods derived from real-world marginal carbon intensity traces. Preliminary results show that curtailment-aware scheduling preserves training quality while reducing operational emissions to 5–12% of single-site baselines.
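
The abstract says curtailment periods are derived from real-world marginal carbon intensity traces, but does not spell out the detection rule. A minimal sketch, assuming a simple thresholding heuristic: whenever the marginal carbon intensity of a site's grid region drops near zero, the marginal generator is renewable and the site is treated as being inside a curtailment window. The CSV layout, column names, and the 20 gCO2eq/kWh cutoff below are illustrative assumptions, not details from the paper.

# Hypothetical sketch: extract curtailment windows from a marginal carbon
# intensity trace. CSV layout and threshold are assumptions, not paper code.
import csv
from dataclasses import dataclass
from datetime import datetime

THRESHOLD = 20.0  # gCO2eq/kWh; assumed cutoff for "marginal generator is renewable"

@dataclass
class Window:
    start: datetime
    end: datetime

def curtailment_windows(trace_path: str, threshold: float = THRESHOLD) -> list[Window]:
    """Return contiguous periods where the marginal carbon intensity stays
    below `threshold`, i.e. periods treated as renewable curtailment."""
    windows: list[Window] = []
    start = None
    last = None
    with open(trace_path, newline="") as f:
        for row in csv.DictReader(f):  # expected columns: timestamp, marginal_gco2_per_kwh
            t = datetime.fromisoformat(row["timestamp"])
            below = float(row["marginal_gco2_per_kwh"]) < threshold
            if below and start is None:
                start = t                          # window opens
            elif not below and start is not None:
                windows.append(Window(start, t))   # window closes
                start = None
            last = t
    if start is not None and last is not None:
        windows.append(Window(start, last))        # trace ended mid-window
    return windows

A scheduler could poll windows computed this way to decide, at any point in time, which of the three clusters is eligible to join a synchronization round.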
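
The elastic switching itself is built on the Flower federated learning framework. Again as a non-authoritative sketch under assumptions: each cluster keeps training locally and only attaches to the Flower server while its region is curtailing. CurtailmentClient, the trainer methods, and in_curtailment_window are hypothetical names; only the flwr entry points (fl.client.NumPyClient, fl.client.start_numpy_client) are actual Flower API, and newer Flower releases favor the ClientApp interface instead.

# Hypothetical sketch of elastic participation with Flower (pip install flwr).
# The trainer object and in_curtailment_window are assumed helpers, not paper code.
import flwr as fl

class CurtailmentClient(fl.client.NumPyClient):
    """Exposes a site's local trainer to federated synchronization rounds."""

    def __init__(self, trainer):
        self.trainer = trainer  # wraps the local copy of the 561M-parameter model

    def get_parameters(self, config):
        return self.trainer.get_weights()        # list of NumPy arrays

    def fit(self, parameters, config):
        self.trainer.set_weights(parameters)     # pull the global model
        n = self.trainer.train_steps(config.get("local_steps", 100))
        return self.trainer.get_weights(), n, {}

    def evaluate(self, parameters, config):
        self.trainer.set_weights(parameters)
        loss, n = self.trainer.validate()
        return float(loss), n, {}

def run_site(trainer, server_address, in_curtailment_window):
    """Train locally outside curtailment; join multi-site sync inside it."""
    while True:
        if in_curtailment_window():
            # Blocks for the duration of the federated rounds, then returns.
            fl.client.start_numpy_client(server_address=server_address,
                                         client=CurtailmentClient(trainer))
        else:
            trainer.train_steps(100)  # local single-site training

On the server side, a strategy such as fl.server.strategy.FedAvg configured with min_fit_clients=1 would let synchronization rounds proceed even when only one cluster is inside a curtailment window, which is one way to tolerate sites becoming available or unavailable.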

Metadata

arXiv ID: 2602.22760
Provider: ARXIV
Primary Category: cs.DC
Categories: cs.DC, cs.AI
Comment: Technical report
Published: 2026-02-26
Fetched: 2026-02-27 04:35
PDF: https://arxiv.org/pdf/2602.22760v1
