SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition

Hao Wang, Xuefeng Liu, Jianwei Niu, Shaojie Tang
{"title":"SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition","authors":"Hao Wang, Xuefeng Liu, Jianwei Niu, Shaojie Tang","doi":"10.1109/INFOCOM53939.2023.10229042","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied to wireless network scenarios, FL usually suffers from high communication cost because clients need to transmit their updated gradients to a server in every training round. Although many gradient compression techniques like sparsification and quantization are proposed, they compress clients’ gradients independently, without considering the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed utilizes Singular Value Decomposition (SVD) to find a few basis vectors, whose linear combination can well represent clients’ gradients at a certain round. Due to the correlations among gradients, these basis vectors can still well approximate new gradients in many subsequent rounds. With the help of basis vectors, clients only need to upload the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages the classical PID (Proportional, Integral, Derivative) control to determine the proper time to update basis vectors to maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. For example, compared to a popular gradient quantization method QSGD, SVDFed can reduce the communication overhead by 66 % and pending time by 99 %.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM53939.2023.10229042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied to wireless network scenarios, FL usually suffers from high communication cost because clients need to transmit their updated gradients to a server in every training round. Although many gradient compression techniques like sparsification and quantization have been proposed, they compress clients' gradients independently, without considering the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed utilizes Singular Value Decomposition (SVD) to find a few basis vectors whose linear combinations can accurately represent clients' gradients in a given round. Due to the correlations among gradients, these basis vectors can still approximate new gradients well in many subsequent rounds. With the help of the basis vectors, clients only need to upload the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages classical PID (Proportional-Integral-Derivative) control to determine the proper time to update the basis vectors so that they maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. For example, compared to QSGD, a popular gradient quantization method, SVDFed can reduce the communication overhead by 66% and pending time by 99%.
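Since the abstract sketches the whole mechanism (an SVD-derived shared basis, coefficient-only uploads, and a PID-controlled basis refresh), a small numerical illustration may help. The NumPy sketch below is not the authors' implementation: the function names (fit_basis, compress, decompress), the PIDController gains and threshold, and the use of relative reconstruction error as the controller's input are all illustrative assumptions layered on the abstract's description.

```python
import numpy as np

def fit_basis(grad_matrix, k):
    """Server side: SVD of the stacked client gradients (one column per
    client); keep the top-k left singular vectors as a shared basis."""
    U, _, _ = np.linalg.svd(grad_matrix, full_matrices=False)
    return U[:, :k]                           # shape (d, k)

def compress(grad, basis):
    """Client side: project the gradient onto the shared basis and upload
    only the k combination coefficients instead of all d entries."""
    return basis.T @ grad                     # shape (k,)

def decompress(coeffs, basis):
    """Server side: reconstruct the gradient as a linear combination."""
    return basis @ coeffs                     # shape (d,)

class PIDController:
    """Decide when the basis has lost representation ability, using the
    relative reconstruction error as the process variable. The gains and
    threshold here are placeholders, not values from the paper."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, threshold=0.5):
        self.kp, self.ki, self.kd, self.threshold = kp, ki, kd, threshold
        self.integral = 0.0
        self.prev_error = 0.0

    def should_refresh(self, error):
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        signal = self.kp * error + self.ki * self.integral + self.kd * derivative
        return signal > self.threshold

# Toy round: n clients with d-dimensional, strongly correlated gradients.
rng = np.random.default_rng(0)
d, n, k = 1000, 8, 4
latent = rng.normal(size=(d, k))              # shared low-rank structure
grads = latent @ rng.normal(size=(k, n))      # this round's gradients (rank k)
basis = fit_basis(grads, k)                   # computed once, shared with clients

g_next = latent @ rng.normal(size=k)          # a later-round gradient
coeffs = compress(g_next, basis)              # client uploads k numbers, not d
g_hat = decompress(coeffs, basis)
err = np.linalg.norm(g_next - g_hat) / np.linalg.norm(g_next)

pid = PIDController()
if pid.should_refresh(err):
    pass  # here the server would collect raw gradients and recompute the basis
```

With d = 1000 and k = 4 in this toy setup, each upload shrinks from 1000 gradient entries to 4 coefficients, which is where the communication savings come from; the PID loop exists because gradient correlations drift over rounds, so a basis fitted once cannot serve forever.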