Selective Updates and Adaptive Masking for Communication-Efficient Federated Learning
Alexander Herzog; Robbie Southam; Othmane Belarbi; Saif Anwar; Marcello Bullo; Pietro Carnelli; Aftab Khan
IEEE Transactions on Green Communications and Networking, vol. 8, no. 2, pp. 852-864
Published: 2024-01-04 · DOI: 10.1109/TGCN.2024.3349697 · https://ieeexplore.ieee.org/document/10380759/
Abstract
Federated Learning (FL) is fast becoming one of the most prevalent distributed learning techniques for privacy preservation and communication efficiency in large-scale Internet of Things (IoT) deployments. In FL, models are trained directly on distributed devices; since local data remains on-device, communication over the network is reduced. However, in large-scale IoT environments or resource-constrained networks, typical FL approaches suffer significant performance degradation due to longer communication times. In this paper, we propose two methods for further reducing communication volume in resource-restricted FL deployments. In our first method, which we term Selective Updates (SU), local models are trained until a dynamic threshold on model performance is surpassed before updates are sent to a centralised Parameter Server (PS). This minimises the number of updates transmitted and thus reduces communication overheads. Our second method, Adaptive Masking (AM), performs parameter masking on both the global and local models prior to sharing, selecting the model parameters that have changed the most between training rounds. We extensively evaluate our proposed methods against state-of-the-art communication reduction strategies using two common benchmark datasets and under different communication-constrained settings. Our proposed methods reduce the overall communication volume by over 20% without affecting model accuracy.
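The abstract describes the two mechanisms only at a high level. The following Python sketch illustrates the underlying ideas under stated assumptions: Adaptive Masking is approximated as keeping the top fraction of parameters whose values changed most since the previous round, and Selective Updates as a simple performance-threshold check before communicating. The function names, the keep_ratio parameter, and the flat NumPy weight representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_mask(prev_params, curr_params, keep_ratio=0.2):
    """Sketch of Adaptive Masking (AM): keep only the parameters that
    changed the most between two training rounds.

    prev_params, curr_params: 1-D arrays of flattened model weights.
    keep_ratio: fraction of parameters to transmit (assumed value).
    Returns the indices and values of the selected parameters, i.e. the
    sparse update that would be sent to the Parameter Server.
    """
    delta = np.abs(curr_params - prev_params)      # per-parameter change
    k = max(1, int(keep_ratio * delta.size))       # number of entries to keep
    idx = np.argpartition(delta, -k)[-k:]          # indices of the largest changes
    return idx, curr_params[idx]

def should_send_update(val_accuracy, threshold):
    """Sketch of Selective Updates (SU): a client keeps training locally and
    only communicates once its model performance exceeds a (dynamic)
    threshold; the threshold schedule here is assumed, not the paper's."""
    return val_accuracy > threshold

# Example: a client decides whether, and what, to upload this round.
rng = np.random.default_rng(0)
prev_w = rng.normal(size=1000)
curr_w = prev_w + rng.normal(scale=0.01, size=1000)

if should_send_update(val_accuracy=0.83, threshold=0.80):
    idx, vals = adaptive_mask(prev_w, curr_w, keep_ratio=0.2)
    print(f"Transmitting {idx.size} of {curr_w.size} parameters")
```

In this toy setting, only 20% of the parameters (plus their indices) would be uploaded, which is the kind of communication-volume reduction the paper targets; the actual masking criterion, threshold dynamics, and global-model masking are detailed in the full text.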