FLEE: A Hierarchical Federated Learning Framework for Distributed Deep Neural Network over Cloud, Edge, and End Device

Zhengyi Zhong, Weidong Bao, Ji Wang, Xiaomin Zhu, Xiongtao Zhang
{"title":"FLEE: A Hierarchical Federated Learning Framework for Distributed Deep Neural Network over Cloud, Edge, and End Device","authors":"Zhengyi Zhong, Weidong Bao, Ji Wang, Xiaomin Zhu, Xiongtao Zhang","doi":"10.1145/3514501","DOIUrl":null,"url":null,"abstract":"With the development of smart devices, the computing capabilities of portable end devices such as mobile phones have been greatly enhanced. Meanwhile, traditional cloud computing faces great challenges caused by privacy-leakage and time-delay problems, there is a trend to push models down to edges and end devices. However, due to the limitation of computing resource, it is difficult for end devices to complete complex computing tasks alone. Therefore, this article divides the model into two parts and deploys them on multiple end devices and edges, respectively. Meanwhile, an early exit is set to reduce computing resource overhead, forming a hierarchical distributed architecture. In order to enable the distributed model to continuously evolve by using new data generated by end devices, we comprehensively consider various data distributions on end devices and edges, proposing a hierarchical federated learning framework FLEE, which can realize dynamical updates of models without redeploying them. Through image and sentence classification experiments, we verify that it can improve model performances under all kinds of data distributions, and prove that compared with other frameworks, the models trained by FLEE consume less global computing resource in the inference stage.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Intelligent Systems and Technology (TIST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3514501","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

With the development of smart devices, the computing capabilities of portable end devices such as mobile phones have been greatly enhanced. Meanwhile, traditional cloud computing faces great challenges caused by privacy leakage and time delays, so there is a trend to push models down to edges and end devices. However, due to limited computing resources, it is difficult for end devices to complete complex computing tasks alone. Therefore, this article divides the model into two parts and deploys them on multiple end devices and edges, respectively. Meanwhile, an early exit is added to reduce computing resource overhead, forming a hierarchical distributed architecture. To enable the distributed model to continuously evolve using new data generated by end devices, we comprehensively consider various data distributions on end devices and edges and propose a hierarchical federated learning framework, FLEE, which can dynamically update models without redeploying them. Through image and sentence classification experiments, we verify that FLEE improves model performance under all kinds of data distributions and show that, compared with other frameworks, the models trained by FLEE consume fewer global computing resources in the inference stage.
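To make the device/edge split with an early exit concrete, below is a minimal illustrative sketch in PyTorch. It is not the paper's implementation: the class names (DeviceSubmodel, EdgeSubmodel), the layer sizes, and the confidence threshold are assumptions introduced only for this example. The idea it shows is the one in the abstract: shallow layers plus an early-exit head run on the end device, and only low-confidence inputs forward their intermediate features to the deeper edge-side part of the model.

# Illustrative sketch only (not the authors' code): a two-part model with an
# early exit. DeviceSubmodel/EdgeSubmodel and the 0.9 threshold are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeviceSubmodel(nn.Module):
    """Shallow layers deployed on the end device, plus an early-exit head."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Early-exit classifier on the device-side features (32x32 input -> 16x16x16).
        self.exit_head = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        feats = self.features(x)
        exit_logits = self.exit_head(feats.flatten(1))
        return feats, exit_logits


class EdgeSubmodel(nn.Module):
    """Deeper layers deployed on the edge server."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, feats):
        return self.head(self.features(feats).flatten(1))


def infer(x, device_model, edge_model, threshold: float = 0.9):
    """Exit on the device if the early exit is confident; otherwise offload."""
    feats, exit_logits = device_model(x)
    conf, pred = F.softmax(exit_logits, dim=1).max(dim=1)
    if conf.item() >= threshold:          # confident: stop at the early exit
        return pred.item(), "device"
    edge_logits = edge_model(feats)       # otherwise send features to the edge
    return edge_logits.argmax(dim=1).item(), "edge"


if __name__ == "__main__":
    device_model, edge_model = DeviceSubmodel().eval(), EdgeSubmodel().eval()
    with torch.no_grad():
        label, where = infer(torch.randn(1, 3, 32, 32), device_model, edge_model)
    print(label, where)

Under this reading, saving global computing resources at inference time comes from samples that terminate at the device-side exit and never reach the edge-side layers.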