March 18, 2026

QuantFL: Sustainable Federated Learning for Edge IoT via Pre-Trained Model Quantisation

Authors

Charuka Herath, Yogachandran Rahulamathavan, Varuna De Silva, Sangarapillai Lambotharan

Abstract

Federated Learning (FL) enables privacy-preserving intelligence on Internet of Things (IoT) devices but incurs a significant carbon footprint due to the high energy cost of frequent uplink transmission. While pre-trained models are increasingly available on edge devices, their potential to reduce the energy overhead of fine-tuning remains underexplored. In this work, we propose QuantFL, a sustainable FL framework that leverages pre-trained initialisation to enable aggressive, computationally lightweight quantisation. We demonstrate that pre-training naturally concentrates update statistics, allowing us to use memory-efficient bucket quantisation without the energy-intensive overhead of complex error-feedback mechanisms. On MNIST and CIFAR-100, QuantFL reduces total communication by 40% (≈40% total-bit reduction with full-precision downlink; ≥80% on uplink or when downlink is quantised) while matching or exceeding uncompressed baselines under strict bandwidth budgets; BU attains 89.00% (MNIST) and 66.89% (CIFAR-100) test accuracy with orders of magnitude fewer bits. We also account for uplink and downlink costs and provide ablations on quantisation levels and initialisation. QuantFL delivers a practical, "green" recipe for scalable training on battery-constrained IoT networks.
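The abstract names the core mechanism, bucket quantisation of model updates without error feedback, but no code is given here. The sketch below is a hypothetical illustration of that general technique, not the authors' implementation: it splits a flattened update into fixed-size buckets, scales each bucket by its own maximum magnitude, and applies unbiased stochastic rounding to a small number of levels. The names `bucket_quantise`, `bucket_size`, and `levels` are assumptions chosen for illustration.

```python
import numpy as np

def bucket_quantise(update, bucket_size=128, levels=16, rng=None):
    """Stochastically quantise a flat update vector, one bucket at a time.

    Hypothetical sketch: each bucket of `bucket_size` entries is
    normalised by its own max-|x| scale and mapped to `levels` uniform
    magnitude levels, with stochastic rounding so the quantiser is
    unbiased in expectation (no error-feedback buffer required).
    """
    rng = rng or np.random.default_rng()
    q = np.empty_like(update)
    for start in range(0, update.size, bucket_size):
        b = update[start:start + bucket_size]
        scale = np.abs(b).max()
        if scale == 0.0:
            q[start:start + bucket_size] = 0.0
            continue
        # Normalised magnitudes in [0, levels - 1].
        x = np.abs(b) / scale * (levels - 1)
        lower = np.floor(x)
        # Round up with probability equal to the fractional part.
        x_q = lower + (rng.random(b.size) < (x - lower))
        q[start:start + bucket_size] = np.sign(b) * x_q / (levels - 1) * scale
    return q
```

Under this scheme, each bucket costs one full-precision scale plus roughly log2(levels) + 1 bits per entry (sign and level index), so with levels=16 a 32-bit float update shrinks by about 80%, in line with the ≥80% uplink reduction the abstract reports. The abstract's observation that pre-training concentrates update statistics is what would keep the per-bucket scales small and make such a simple quantiser viable without error feedback.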

Metadata

arXiv ID: 2603.17507
Provider: ARXIV
Primary Category: cs.LG
Published: 2026-03-18
Fetched: 2026-03-19 06:01
