{"title":"SVDFed:通过奇异值分解实现高效通信的联邦学习","authors":"Hao Wang, Xuefeng Liu, Jianwei Niu, Shaojie Tang","doi":"10.1109/INFOCOM53939.2023.10229042","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied to wireless network scenarios, FL usually suffers from high communication cost because clients need to transmit their updated gradients to a server in every training round. Although many gradient compression techniques like sparsification and quantization are proposed, they compress clients’ gradients independently, without considering the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed utilizes Singular Value Decomposition (SVD) to find a few basis vectors, whose linear combination can well represent clients’ gradients at a certain round. Due to the correlations among gradients, these basis vectors can still well approximate new gradients in many subsequent rounds. With the help of basis vectors, clients only need to upload the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages the classical PID (Proportional, Integral, Derivative) control to determine the proper time to update basis vectors to maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. For example, compared to a popular gradient quantization method QSGD, SVDFed can reduce the communication overhead by 66 % and pending time by 99 %.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition\",\"authors\":\"Hao Wang, Xuefeng Liu, Jianwei Niu, Shaojie Tang\",\"doi\":\"10.1109/INFOCOM53939.2023.10229042\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied to wireless network scenarios, FL usually suffers from high communication cost because clients need to transmit their updated gradients to a server in every training round. Although many gradient compression techniques like sparsification and quantization are proposed, they compress clients’ gradients independently, without considering the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed utilizes Singular Value Decomposition (SVD) to find a few basis vectors, whose linear combination can well represent clients’ gradients at a certain round. Due to the correlations among gradients, these basis vectors can still well approximate new gradients in many subsequent rounds. With the help of basis vectors, clients only need to upload the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages the classical PID (Proportional, Integral, Derivative) control to determine the proper time to update basis vectors to maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. 
For example, compared to a popular gradient quantization method QSGD, SVDFed can reduce the communication overhead by 66 % and pending time by 99 %.\",\"PeriodicalId\":387707,\"journal\":{\"name\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOM53939.2023.10229042\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOM53939.2023.10229042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition
Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied in wireless network scenarios, FL usually suffers from high communication cost because clients need to transmit their updated gradients to a server in every training round. Although many gradient compression techniques such as sparsification and quantization have been proposed, they compress clients' gradients independently, without considering the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed utilizes Singular Value Decomposition (SVD) to find a few basis vectors whose linear combination can accurately represent clients' gradients at a given round. Due to the correlations among gradients, these basis vectors can still approximate new gradients well in many subsequent rounds. With the help of the basis vectors, clients only need to upload the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages classical PID (Proportional, Integral, Derivative) control to determine the proper time to update the basis vectors so that they maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. For example, compared to QSGD, a popular gradient quantization method, SVDFed reduces communication overhead by 66% and pending time by 99%.
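
The mechanism the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the rank k, the PID gains, and the refresh threshold below are illustrative assumptions, and the toy gradients are synthetic low-rank data standing in for the correlated client gradients the paper exploits. The server fits a basis from the SVD of stacked gradients; each client then uploads only k projection coefficients, and a PID-style signal on the reconstruction error decides when the basis needs refreshing.

import numpy as np

def fit_basis(grad_matrix, k):
    # Server: SVD of the d x n matrix of stacked client gradients;
    # the top-k left singular vectors form the shared basis.
    U, _, _ = np.linalg.svd(grad_matrix, full_matrices=False)
    return U[:, :k]

def compress(grad, basis):
    # Client: upload only the k projection coefficients.
    return basis.T @ grad

def decompress(coeffs, basis):
    # Server: reconstruct the gradient from the coefficients.
    return basis @ coeffs

class PIDTrigger:
    # Refresh decision driven by the reconstruction error; gains and
    # threshold are illustrative assumptions, not values from the paper.
    def __init__(self, kp=1.0, ki=0.1, kd=0.5, threshold=0.3):
        self.kp, self.ki, self.kd, self.threshold = kp, ki, kd, threshold
        self.integral = 0.0
        self.prev_err = 0.0

    def should_refresh(self, err):
        self.integral += err
        derivative = err - self.prev_err
        self.prev_err = err
        signal = self.kp * err + self.ki * self.integral + self.kd * derivative
        return signal > self.threshold

# Toy round: d = 10,000 parameters, n = 64 gradients that are nearly rank-16,
# mimicking the cross-round correlation SVDFed exploits.
rng = np.random.default_rng(0)
grads = rng.normal(size=(10_000, 16)) @ rng.normal(size=(16, 64))
grads += 0.01 * rng.normal(size=(10_000, 64))

basis = fit_basis(grads, k=16)           # shared basis, distributed once
g = grads[:, 0]
coeffs = compress(g, basis)              # only 16 floats go to the server
g_hat = decompress(coeffs, basis)
rel_err = np.linalg.norm(g - g_hat) / np.linalg.norm(g)

trigger = PIDTrigger()
print(f"relative error: {rel_err:.4f}, refresh basis: {trigger.should_refresh(rel_err)}")

Because the synthetic gradients are nearly rank-16, the 16 uploaded coefficients reconstruct each 10,000-dimensional gradient almost exactly, which is the communication saving the abstract describes; once new gradients drift away from the span of the basis, the reconstruction error and hence the PID signal grow, triggering a basis refresh.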