Degree-aware In-network Aggregation for Federated Learning with Fog Computing

Wan-Ting Ho, S. Fang, Tingfeng Liu, Jian-Jhih Kuo
{"title":"基于雾计算的联邦学习的度感知网络内聚合","authors":"Wan-Ting Ho, S. Fang, Tingfeng Liu, Jian-Jhih Kuo","doi":"10.1109/GCWkshps52748.2021.9682059","DOIUrl":null,"url":null,"abstract":"Data privacy preservation has drawn much attention in emerging machine learning applications, and thus collaborative training is getting much higher such as Federated Learning (FL). However, FL requires a central server to aggregate local models trained by different users. Thus, the central server may become a crucial network bottleneck and limit scalability. To remedy this issue, a novel Fog Computing (FC)-based FL is presented to locally train the model and cooperate to accomplish in-network aggregation to prevent overwhelm the central server. Then, the paper formulates a new optimization problem termed DAT to minimize the total communication cost and maximum latency jointly. We first prove the hardness and propose two efficient algorithms, ADAT-C and ADAT, for the special and general cases, respectively. Simulation and experiment results manifest that our algorithms at least outperform 30% of communication cost compared with other heuristics without sacrificing the convergence rate.","PeriodicalId":6802,"journal":{"name":"2021 IEEE Globecom Workshops (GC Wkshps)","volume":"8 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Degree-aware In-network Aggregation for Federated Learning with Fog Computing\",\"authors\":\"Wan-Ting Ho, S. Fang, Tingfeng Liu, Jian-Jhih Kuo\",\"doi\":\"10.1109/GCWkshps52748.2021.9682059\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data privacy preservation has drawn much attention in emerging machine learning applications, and thus collaborative training is getting much higher such as Federated Learning (FL). However, FL requires a central server to aggregate local models trained by different users. Thus, the central server may become a crucial network bottleneck and limit scalability. To remedy this issue, a novel Fog Computing (FC)-based FL is presented to locally train the model and cooperate to accomplish in-network aggregation to prevent overwhelm the central server. Then, the paper formulates a new optimization problem termed DAT to minimize the total communication cost and maximum latency jointly. We first prove the hardness and propose two efficient algorithms, ADAT-C and ADAT, for the special and general cases, respectively. 
Simulation and experiment results manifest that our algorithms at least outperform 30% of communication cost compared with other heuristics without sacrificing the convergence rate.\",\"PeriodicalId\":6802,\"journal\":{\"name\":\"2021 IEEE Globecom Workshops (GC Wkshps)\",\"volume\":\"8 1\",\"pages\":\"1-6\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Globecom Workshops (GC Wkshps)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GCWkshps52748.2021.9682059\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Globecom Workshops (GC Wkshps)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GCWkshps52748.2021.9682059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Data privacy preservation has drawn much attention in emerging machine learning applications, and collaborative training paradigms such as Federated Learning (FL) are therefore attracting growing interest. However, FL requires a central server to aggregate the local models trained by different users, so the central server may become a critical network bottleneck that limits scalability. To remedy this issue, a novel Fog Computing (FC)-based FL scheme is presented in which fog nodes train models locally and cooperate to accomplish in-network aggregation, preventing the central server from being overwhelmed. The paper then formulates a new optimization problem, termed DAT, to jointly minimize the total communication cost and the maximum latency. We first prove the hardness of DAT and then propose two efficient algorithms, ADAT-C and ADAT, for the special and general cases, respectively. Simulation and experiment results show that our algorithms reduce communication cost by at least 30% compared with other heuristics, without sacrificing the convergence rate.
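As a rough rendering of the joint objective described above (the paper's exact DAT formulation is not reproduced here; the symbols $\pi$, $c_{uv}$, $\ell_v$, $\alpha$, and $\beta$ below are expository assumptions), the problem can be read as choosing an aggregation routing $\pi$ that balances total communication cost against worst-case latency:

```latex
\min_{\pi} \;\; \alpha \sum_{(u,v) \in \pi} c_{uv} \;+\; \beta \max_{v \in V} \ell_v(\pi)
```

where $c_{uv}$ is the cost of sending a model update over link $(u,v)$, $\ell_v(\pi)$ is the aggregation latency seen at node $v$ under routing $\pi$, and $\alpha$, $\beta$ trade off the two terms.

To make the in-network aggregation idea concrete, the following is a minimal Python sketch, assuming FedAvg-style weighted averaging. It is not the paper's ADAT or ADAT-C algorithm; it only illustrates why pre-aggregating at fog nodes relieves the central server.

```python
# Illustrative sketch of fog-based in-network aggregation for FL.
# NOT the paper's ADAT/ADAT-C algorithm: it only shows the general idea
# that fog nodes pre-aggregate local models, so the central server
# receives a few partial aggregates instead of every device's model.
from typing import Dict, List

Model = Dict[str, float]  # toy stand-in for a parameter vector

def weighted_average(models: List[Model], weights: List[int]) -> Model:
    """FedAvg-style aggregation weighted by local sample counts."""
    total = sum(weights)
    return {k: sum(m[k] * w for m, w in zip(models, weights)) / total
            for k in models[0]}

def fog_round(fog_groups: List[List[Model]],
              fog_weights: List[List[int]]) -> Model:
    """One communication round: each fog node aggregates its own group,
    then the central server aggregates only the partial results."""
    partials, partial_weights = [], []
    for models, weights in zip(fog_groups, fog_weights):
        partials.append(weighted_average(models, weights))
        partial_weights.append(sum(weights))  # carry the group's total weight
    # The server now handles len(fog_groups) uploads instead of
    # sum(len(g) for g in fog_groups), relieving the bottleneck.
    return weighted_average(partials, partial_weights)

if __name__ == "__main__":
    group_a = [{"w": 1.0}, {"w": 3.0}]  # models under fog node A
    group_b = [{"w": 2.0}]              # models under fog node B
    global_model = fog_round([group_a, group_b], [[10, 30], [20]])
    print(global_model)  # {'w': 2.333...}, identical to flat FedAvg
```

Because each fog node carries its group's total sample weight forward, the two-level average equals the flat FedAvg result exactly, while the server handles one upload per fog node instead of one per device.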