{"title":"用于通信高效联邦学习的量化压缩感知","authors":"Yong-Nam Oh, N. Lee, Yo-Seb Jeon","doi":"10.1109/GCWkshps52748.2021.9682076","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) is a decentralized artificial intelligence technique for training a global model on a parameter server (PS) through collaboration with wireless devices, each with its own local training data set. In this paper, we present a communication-efficient FL framework which consists of gradient compression and reconstruction strategies based on quantized compressed sensing (QCS). The key idea of the gradient compression strategy is to compress-and-quantize a local gradient vector computed at each device after sparsifying this vector in a block wise fashion. Our gradient compression strategy can make communication overhead less than one bit per gradient entry. For accurate reconstruction of the local gradient from the compressed signals at the PS, we employ a expectation-maximization generalized-approximate-message-passing algorithm. The algorithm iteratively computes an approximate minimum mean square error solution of the local gradient, while learning the unknown model parameters of the Bernoulli Gaussian-mixture prior. Using the MNIST data set, we demonstrate that the presented FL framework can achieve almost identical classification performance with the case that performs no compression, while achieving a significant reduction of communication overhead.","PeriodicalId":6802,"journal":{"name":"2021 IEEE Globecom Workshops (GC Wkshps)","volume":"42 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Quantized Compressed Sensing for Communication-Efficient Federated Learning\",\"authors\":\"Yong-Nam Oh, N. Lee, Yo-Seb Jeon\",\"doi\":\"10.1109/GCWkshps52748.2021.9682076\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) is a decentralized artificial intelligence technique for training a global model on a parameter server (PS) through collaboration with wireless devices, each with its own local training data set. In this paper, we present a communication-efficient FL framework which consists of gradient compression and reconstruction strategies based on quantized compressed sensing (QCS). The key idea of the gradient compression strategy is to compress-and-quantize a local gradient vector computed at each device after sparsifying this vector in a block wise fashion. Our gradient compression strategy can make communication overhead less than one bit per gradient entry. For accurate reconstruction of the local gradient from the compressed signals at the PS, we employ a expectation-maximization generalized-approximate-message-passing algorithm. The algorithm iteratively computes an approximate minimum mean square error solution of the local gradient, while learning the unknown model parameters of the Bernoulli Gaussian-mixture prior. 
Using the MNIST data set, we demonstrate that the presented FL framework can achieve almost identical classification performance with the case that performs no compression, while achieving a significant reduction of communication overhead.\",\"PeriodicalId\":6802,\"journal\":{\"name\":\"2021 IEEE Globecom Workshops (GC Wkshps)\",\"volume\":\"42 1\",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Globecom Workshops (GC Wkshps)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GCWkshps52748.2021.9682076\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Globecom Workshops (GC Wkshps)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GCWkshps52748.2021.9682076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Quantized Compressed Sensing for Communication-Efficient Federated Learning
Federated learning (FL) is a decentralized artificial intelligence technique for training a global model on a parameter server (PS) through collaboration with wireless devices, each with its own local training data set. In this paper, we present a communication-efficient FL framework consisting of gradient compression and reconstruction strategies based on quantized compressed sensing (QCS). The key idea of the gradient compression strategy is to sparsify the local gradient vector computed at each device in a block-wise fashion and then compress and quantize it. This strategy reduces the communication overhead to less than one bit per gradient entry. For accurate reconstruction of the local gradients from the compressed signals at the PS, we employ an expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm. The algorithm iteratively computes an approximate minimum-mean-square-error solution for the local gradient while learning the unknown parameters of a Bernoulli Gaussian-mixture prior. Using the MNIST data set, we demonstrate that the presented FL framework achieves classification performance almost identical to that of the uncompressed case, while significantly reducing the communication overhead.
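As a rough illustration of the compression step described in the abstract (not the authors' code), the Python sketch below sparsifies each gradient block, projects it with a random Gaussian measurement matrix, and quantizes each measurement to one bit. The block size, sparsity level, number of measurements per block, and the 1-bit quantizer are illustrative assumptions, and the EM-GAMP reconstruction at the PS is not shown.

    # Minimal sketch of QCS-style gradient compression under assumed parameters:
    # block-wise sparsification, random Gaussian compression, 1-bit quantization.
    import numpy as np

    def compress_gradient(grad, block_size=64, keep=4, m=16, rng=None):
        """Sparsify each block, project it to m < block_size Gaussian
        measurements, and quantize each measurement to one bit."""
        rng = np.random.default_rng(0) if rng is None else rng
        pad = (-len(grad)) % block_size
        blocks = np.pad(grad, (0, pad)).reshape(-1, block_size)

        compressed = []
        for block in blocks:
            # Block-wise sparsification: keep only the largest-magnitude entries.
            sparse = np.zeros_like(block)
            idx = np.argsort(np.abs(block))[-keep:]
            sparse[idx] = block[idx]

            # Compression: m random Gaussian measurements of the sparse block.
            A = rng.standard_normal((m, block_size)) / np.sqrt(m)
            y = A @ sparse

            # Coarse quantization: 1 bit per measurement, i.e. m bits per block,
            # which is below one bit per gradient entry whenever m < block_size.
            compressed.append(np.sign(y))
        return compressed

    if __name__ == "__main__":
        g = np.random.default_rng(1).standard_normal(256)
        out = compress_gradient(g)
        print(f"{len(out)} blocks, about {16 / 64:.2f} bits per gradient entry")

In this toy setting, each device would transmit only the quantized measurements (the measurement matrices can be regenerated at the PS from a shared seed), and the PS would recover the sparse gradient blocks with an approximate-message-passing solver such as EM-GAMP.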