Selective Updates and Adaptive Masking for Communication-Efficient Federated Learning

IF 5.3 · CAS Tier 2 (Computer Science) · Q1 TELECOMMUNICATIONS
Alexander Herzog;Robbie Southam;Othmane Belarbi;Saif Anwar;Marcello Bullo;Pietro Carnelli;Aftab Khan
{"title":"Selective Updates and Adaptive Masking for Communication-Efficient Federated Learning","authors":"Alexander Herzog;Robbie Southam;Othmane Belarbi;Saif Anwar;Marcello Bullo;Pietro Carnelli;Aftab Khan","doi":"10.1109/TGCN.2024.3349697","DOIUrl":null,"url":null,"abstract":"Federated Learning (FL) is fast becoming one of the most prevalent distributed learning techniques focused on privacy preservation and communication efficiency for large-scale Internet of Things (IoT) deployments. FL is a distributed learning approach to training models on distributed devices. Since local data remains on-device, communication through the network is reduced. However, in large-scale IoT environments or resource constrained networks, typical FL approaches significantly suffer in performance due to longer communication times. In this paper, we propose two methods for further reducing communication volume in resource restricted FL deployments. In our first method, which we term Selective Updates (SU), local models are trained until a dynamic threshold on model performance is surpassed before sending updates to a centralised Parameter Server (PS). This allows for minimal updates being transmitted, thus reducing communication overheads. Our second method, Adaptive Masking (AM), performs parameter masking on both the global and local models prior to sharing. With AM, we select model parameters that have changed the most between training rounds. We extensively evaluate our proposed methods against state-of-the-art communication reduction strategies using two common benchmark datasets, and under different communication constrained settings. Our proposed methods reduce the overall communication volume by over 20%, without affecting the model accuracy.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"8 2","pages":"852-864"},"PeriodicalIF":5.3000,"publicationDate":"2024-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Green Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10380759/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Federated Learning (FL) is fast becoming one of the most prevalent distributed learning techniques focused on privacy preservation and communication efficiency for large-scale Internet of Things (IoT) deployments. FL is a distributed learning approach to training models on distributed devices. Since local data remains on-device, communication through the network is reduced. However, in large-scale IoT environments or resource-constrained networks, typical FL approaches suffer significantly in performance due to longer communication times. In this paper, we propose two methods for further reducing communication volume in resource-restricted FL deployments. In our first method, which we term Selective Updates (SU), local models are trained until a dynamic threshold on model performance is surpassed before sending updates to a centralised Parameter Server (PS). This allows for minimal updates to be transmitted, thus reducing communication overheads. Our second method, Adaptive Masking (AM), performs parameter masking on both the global and local models prior to sharing. With AM, we select the model parameters that have changed the most between training rounds. We extensively evaluate our proposed methods against state-of-the-art communication reduction strategies using two common benchmark datasets, and under different communication-constrained settings. Our proposed methods reduce the overall communication volume by over 20%, without affecting the model accuracy.
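The abstract describes the two mechanisms only at a high level. The sketch below is a minimal, self-contained illustration of how the described ideas could look on the client side; the toy model, the threshold schedule, the masking fraction and all function names are assumptions made for illustration and do not reflect the authors' implementation.

```python
# Illustrative sketch (not the paper's code) of the two ideas described above,
# written against a toy NumPy logistic-regression task.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy local task: synthetic linearly separable data -----------------------
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

def accuracy(w):
    """Local validation metric used as the performance signal."""
    return float(((X @ w > 0).astype(float) == y).mean())

def sgd_epoch(w, lr=0.1):
    """One full-batch logistic-regression gradient step."""
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (preds - y) / len(y)
    return w - lr * grad

# --- Selective Updates (SU): train until a dynamic threshold is surpassed ----
def selective_update(global_w, round_idx, max_epochs=50):
    threshold = min(0.60 + 0.02 * round_idx, 0.95)   # assumed threshold schedule
    w = global_w.copy()
    for _ in range(max_epochs):
        w = sgd_epoch(w)
        if accuracy(w) > threshold:
            return w - global_w                       # delta sent to the server
    return None                                       # nothing transmitted this round

# --- Adaptive Masking (AM): share only the parameters that changed the most --
def adaptive_mask(delta, keep_fraction=0.2):
    k = max(1, int(keep_fraction * delta.size))
    idx = np.argsort(np.abs(delta))[-k:]              # largest-magnitude changes
    masked = np.zeros_like(delta)
    masked[idx] = delta[idx]
    return masked                                      # sparse update to transmit

# --- Example round ------------------------------------------------------------
global_w = np.zeros(10)
delta = selective_update(global_w, round_idx=0)
if delta is not None:
    global_w += adaptive_mask(delta)
    print("round accuracy:", accuracy(global_w))
```

In this reading, SU reduces how often updates are sent (a round can be skipped entirely), while AM reduces how much is sent per update by keeping only the top-magnitude parameter changes; the actual threshold schedule and masking criterion used in the paper are defined by the authors.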
Source journal
IEEE Transactions on Green Communications and Networking (Computer Science - Computer Networks and Communications)
CiteScore: 9.30
Self-citation rate: 6.20%
Articles published: 181