TEA-fed: time-efficient asynchronous federated learning for edge computing

Chen Zhou, Hao Tian, Hong Zhang, Jin Zhang, M. Dong, Juncheng Jia
{"title":"TEA-fed: time-efficient asynchronous federated learning for edge computing","authors":"Chen Zhou, Hao Tian, Hong Zhang, Jin Zhang, M. Dong, Juncheng Jia","doi":"10.1145/3457388.3458655","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) has attracted more and more attention recently. The integration of FL and edge computing makes the edge system more efficient and intelligent. FL usually uses the server to actively select certain edge devices to participate in the global model training. However, the selected edge devices may be stragglers, or even crash during training. Meanwhile, the unselected idle edge devices cannot be fully utilized for training. Therefore, besides the widely studied communication efficiency and data heterogeneity issues in FL, we also take the above time efficiency into consideration, and propose a time-efficient asynchronous federated learning protocol, TEA-Fed, to solve these problems. With TEA-Fed, idle edge devices actively apply for training tasks and participate in model training asynchronously once assigned tasks. Considering that there may be a huge number of edge devices in edge computing, we introduce control parameters to limit the number of devices participating in training the identical model at the same time. Meanwhile, we also introduce caching mechanism and weighted averaging with respect to model staleness in the model aggregation step to reduce the adverse effects of model staleness and further improve the accuracy of the global model. Finally, the experimental results show that the protocol can accelerate the convergence of model training, improve the accuracy, and has robustness to heterogeneous data.","PeriodicalId":136482,"journal":{"name":"Proceedings of the 18th ACM International Conference on Computing Frontiers","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th ACM International Conference on Computing Frontiers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3457388.3458655","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 15

Abstract

Federated learning (FL) has attracted increasing attention in recent years. Integrating FL with edge computing makes edge systems more efficient and intelligent. In FL, the server typically selects certain edge devices to participate in global model training. However, the selected devices may be stragglers, or may even crash during training, while the unselected idle devices go unused. Therefore, besides the widely studied communication-efficiency and data-heterogeneity issues in FL, we also take this time efficiency into consideration and propose a time-efficient asynchronous federated learning protocol, TEA-Fed, to address these problems. With TEA-Fed, idle edge devices actively apply for training tasks and, once assigned a task, participate in model training asynchronously. Since edge computing may involve a huge number of edge devices, we introduce control parameters to limit the number of devices training the same model at the same time. We also introduce a caching mechanism and staleness-aware weighted averaging in the model aggregation step to mitigate the adverse effects of model staleness and further improve the accuracy of the global model. Experimental results show that the protocol accelerates the convergence of model training, improves accuracy, and is robust to heterogeneous data.
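The abstract describes three server-side mechanisms in prose: a concurrency cap on devices training the same model, a cache of received updates, and staleness-weighted averaging at aggregation. The sketch below illustrates how these could fit together, under stated assumptions. The class and method names (TeaFedServer, request_task, submit_update), the 1/(1+s) staleness decay, and the fixed mixing rate base_lr are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of a TEA-Fed-style server. Assumptions (not from the
# paper): a 1/(1 + staleness) weight decay and a fixed global mixing rate.
import numpy as np

class TeaFedServer:
    """Task admission, an update cache, and staleness-weighted averaging."""

    def __init__(self, model_dim, max_concurrent=10, cache_size=4, base_lr=0.5):
        self.w = np.zeros(model_dim)          # global model parameters
        self.version = 0                      # incremented on every aggregation
        self.in_flight = 0                    # devices currently holding a task
        self.max_concurrent = max_concurrent  # control parameter from the abstract
        self.cache = []                       # buffered (staleness, update) pairs
        self.cache_size = cache_size
        self.base_lr = base_lr                # assumed fixed mixing rate

    def request_task(self):
        # An idle device actively applies for a task; the control parameter
        # caps how many devices train the same model concurrently.
        if self.in_flight >= self.max_concurrent:
            return None                       # device should retry later
        self.in_flight += 1
        return self.version, self.w.copy()

    def submit_update(self, start_version, w_local):
        # Cache the finished update together with its staleness (how many
        # aggregations happened since the device pulled the model), then
        # aggregate once enough updates have accumulated.
        self.in_flight -= 1
        staleness = self.version - start_version
        self.cache.append((staleness, w_local))
        if len(self.cache) >= self.cache_size:
            self._aggregate()

    def _aggregate(self):
        # Weighted averaging with respect to staleness: fresher updates
        # (smaller staleness) receive larger weights.
        alphas = np.array([1.0 / (1 + s) for s, _ in self.cache])
        alphas /= alphas.sum()
        w_new = sum(a * w for a, (_, w) in zip(alphas, self.cache))
        self.w = (1 - self.base_lr) * self.w + self.base_lr * w_new
        self.version += 1
        self.cache.clear()

# Usage: two devices pull the same model version, finish at different
# times, and the second submission triggers one aggregation step.
server = TeaFedServer(model_dim=3, max_concurrent=2, cache_size=2)
v0, w0 = server.request_task()
v1, w1 = server.request_task()
server.submit_update(v0, w0 + 1.0)
server.submit_update(v1, w1 - 0.5)   # cache full -> staleness-weighted average
print(server.version, server.w)
```

Buffering updates in a cache before averaging, rather than applying each one immediately, lets a single aggregation step balance several updates of different staleness against each other, which is how the abstract motivates combining the two mechanisms.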