{"title":"SQuaFL: Sketch-Quantization Inspired Communication Efficient Federated Learning","authors":"Pavana Prakash, Jiahao Ding, Minglei Shu, Junyi Wang, Wenjun Xu, Miao Pan","doi":"10.1145/3453142.3491415","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is a fast-growing distributed learning paradigm with widespread applications especially over mobile devices, since it trains high-quality deep learning models while keeping the data private. This aspect is most suitable in multi-access edge computing settings where FL leverages distributed data from numerous mobile edge devices for training. However, FL involves frequent global synchronization of periodic updates over links often with transmission rate limits, inflicting communication burdens. Moreover, the intensive on-device computation of local updates results in computation and memory overhead on resource constricted mobile devices. To address these challenges, in this paper, we introduce SQuaFL, a sketched quantization based novel FL method which aims at communication efficiency while preserving privacy. In particular, we compress the accumulation of local gradients using quantization and Count Sketches without adding explicit noise, sacrificing the learning performance, or introducing a computation overhead. We provide theoretical guarantees of convergence of our proposed scheme and perform extensive simulations to demonstrate its efficacy over baseline methods.","PeriodicalId":6779,"journal":{"name":"2021 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"414 1","pages":"350-354"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/ACM Symposium on Edge Computing (SEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3453142.3491415","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Federated Learning (FL) is a fast-growing distributed learning paradigm with widespread applications, especially on mobile devices, since it trains high-quality deep learning models while keeping data private. This property makes FL particularly suitable for multi-access edge computing settings, where it leverages distributed data from numerous mobile edge devices for training. However, FL requires frequent global synchronization of periodic updates over links that often have transmission rate limits, imposing a substantial communication burden. Moreover, the intensive on-device computation of local updates incurs computation and memory overhead on resource-constrained mobile devices. To address these challenges, we introduce SQuaFL, a novel sketched-quantization-based FL method that targets communication efficiency while preserving privacy. In particular, we compress the accumulation of local gradients using quantization and Count Sketches without adding explicit noise, sacrificing learning performance, or introducing computation overhead. We provide theoretical convergence guarantees for the proposed scheme and perform extensive simulations to demonstrate its efficacy over baseline methods.
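To make the compression pipeline the abstract describes more concrete, below is a minimal, hypothetical Python sketch of the two primitives it names: a Count Sketch of a local gradient vector, followed by uniform quantization of the sketch table before upload. This is not the authors' SQuaFL algorithm; the function names, the 8-bit uniform quantizer, and the median-based decoder are illustrative assumptions, and SQuaFL's actual gradient-accumulation, privacy, and convergence machinery are specified in the paper.

```python
import numpy as np

def make_hashes(seed, rows, dim, cols):
    """Derive per-row buckets h_j(i) and signs s_j(i) for each coordinate.
    A production sketch would use pairwise-independent hash functions;
    fixed random tables keep this toy example self-contained."""
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, cols, size=(rows, dim))   # h_j(i) in [0, cols)
    signs = rng.choice([-1.0, 1.0], size=(rows, dim))   # s_j(i) in {-1, +1}
    return buckets, signs

def count_sketch(grad, buckets, signs, cols):
    """Compress a dim-length gradient into an (rows x cols) Count Sketch."""
    rows = buckets.shape[0]
    table = np.zeros((rows, cols))
    for j in range(rows):
        # Scatter-add each signed coordinate into its hashed bucket.
        np.add.at(table[j], buckets[j], signs[j] * grad)
    return table

def quantize(table, bits=8):
    """Symmetric uniform quantization of sketch entries to `bits` bits
    (an assumed quantizer; the paper's scheme may differ)."""
    scale = np.abs(table).max() / (2 ** (bits - 1) - 1)
    if scale == 0.0:
        return table
    return np.round(table / scale) * scale

def unsketch(table, buckets, signs):
    """Estimate each coordinate as the median of its sign-corrected cells."""
    rows = table.shape[0]
    cells = table[np.arange(rows)[:, None], buckets]    # (rows, dim)
    return np.median(signs * cells, axis=0)

# Toy round: a client sketches and quantizes its gradient, the server decodes.
dim, rows, cols = 10_000, 5, 512
grad = np.random.default_rng(0).standard_normal(dim)
buckets, signs = make_hashes(seed=42, rows=rows, dim=dim, cols=cols)
uploaded = quantize(count_sketch(grad, buckets, signs, cols))
recovered = unsketch(uploaded, buckets, signs)
print(f"compression ratio: {dim / (rows * cols):.1f}x")
```

Because the sketch has rows * cols entries regardless of the model dimension, the upload cost is fixed even as dim grows; the median decoding bounds the error on heavy (large-magnitude) coordinates, which is what makes sketching attractive for transmitting accumulated gradients over rate-limited links.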