Paper
Lagom: Unleashing the Power of Communication and Computation Overlapping for Distributed LLM Training
Authors
Guanbin Xu, ZhenGuo Xu, Yuzhe Li, Youhui Bai, Ping Gong, Chaoyi Ruan, Cheng Li
Abstract
Overlapping communication with computation is crucial for distributed large-model training, yet optimizing this overlap, especially when computation becomes the bottleneck, remains challenging. We present Lagom, a system that co-tunes communication parameters to balance resource usage between computation and communication. By introducing a unified cost model and a priority-based search algorithm, Lagom reduces optimization complexity from exponential to linear. Evaluations on high- and low-bandwidth GPU clusters show that Lagom achieves 1.07-1.33x and 1.03-1.27x speedups over NCCL and AutoCCL across diverse models and parallelization strategies.
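To make the co-tuning idea concrete, the sketch below is a minimal, hypothetical Python illustration, not the paper's implementation: the tunable knobs (chunk size and channel count), the cost-model coefficients, and all function names are assumptions. It shows a toy unified cost model that estimates overlapped step time and a priority queue that settles one layer's parameters at a time, which is what keeps the search linear rather than exponential in the joint configuration space.

# Hypothetical sketch (not Lagom's actual code): a priority-based greedy search
# over per-layer communication parameters, scored by a toy unified cost model.
import heapq
from dataclasses import dataclass

@dataclass(frozen=True)
class CommConfig:
    chunk_kb: int   # chunk size handed to the collective (assumed tunable knob)
    channels: int   # channels used by the collective (proxy for SMs taken from compute)

def step_time(compute_ms: float, comm_ms: float, cfg: CommConfig) -> float:
    # Toy unified cost model: more channels speed up communication but slow down
    # the compute kernels they overlap with; smaller chunks add per-chunk overhead.
    comm = comm_ms / cfg.channels + 0.05 * (1024.0 / cfg.chunk_kb)
    compute = compute_ms * (1.0 + 0.02 * cfg.channels)
    return max(compute, comm)   # the overlapped region is bounded by the slower side

def tune(layers, candidates):
    # Order layers by how much they stand to gain, then settle each one exactly
    # once, so the search cost grows linearly with the number of layers instead
    # of exponentially with the joint configuration space.
    default = CommConfig(chunk_kb=512, channels=1)
    heap = []
    for idx, (compute_ms, comm_ms) in enumerate(layers):
        base = step_time(compute_ms, comm_ms, default)
        best = min(step_time(compute_ms, comm_ms, c) for c in candidates)
        heapq.heappush(heap, (-(base - best), idx))   # largest potential gain first
    chosen = {}
    while heap:
        _, idx = heapq.heappop(heap)
        compute_ms, comm_ms = layers[idx]
        chosen[idx] = min(candidates, key=lambda c: step_time(compute_ms, comm_ms, c))
    return chosen

if __name__ == "__main__":
    layers = [(2.0, 6.0), (4.0, 3.0), (1.0, 8.0)]            # (compute_ms, comm_ms)
    candidates = [CommConfig(512, c) for c in (1, 2, 4, 8)]
    for idx, cfg in sorted(tune(layers, candidates).items()):
        print(f"layer {idx}: channels={cfg.channels}, chunk={cfg.chunk_kb}KB")

In this toy version each layer's choice is independent, so the priority ordering only fixes the visitation order; the point is the structure of the search, not the specific cost model.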
Metadata
arXiv:2602.20656v1 [cs.DC] • Published 2026-02-24 • 6 pages, 8 figures
PDF: https://arxiv.org/pdf/2602.20656v1
Related papers
Vibe Coding XR: Accelerating AI + XR Prototyping with XR Blocks and Gemini
Ruofei Du, Benjamin Hersh, David Li, Nels Numan, Xun Qian, Yanhe Chen, Zhongy... • 2026-03-25
Comparing Developer and LLM Biases in Code Evaluation
Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu, Chris Donah... • 2026-03-25
The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence
Biplab Pal, Santanu Bhattacharya • 2026-03-25
Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA
Saahil Mathur, Ryan David Rittner, Vedant Ajit Thakur, Daniel Stuart Schiff, ... • 2026-03-25
MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination
Zhuo Li, Yupeng Zhang, Pengyu Cheng, Jiajun Song, Mengyu Zhou, Hao Li, Shujie... • 2026-03-25