{"title":"Degree-aware In-network Aggregation for Federated Learning with Fog Computing","authors":"Wan-Ting Ho, S. Fang, Tingfeng Liu, Jian-Jhih Kuo","doi":"10.1109/GCWkshps52748.2021.9682059","DOIUrl":null,"url":null,"abstract":"Data privacy preservation has drawn much attention in emerging machine learning applications, and thus collaborative training is getting much higher such as Federated Learning (FL). However, FL requires a central server to aggregate local models trained by different users. Thus, the central server may become a crucial network bottleneck and limit scalability. To remedy this issue, a novel Fog Computing (FC)-based FL is presented to locally train the model and cooperate to accomplish in-network aggregation to prevent overwhelm the central server. Then, the paper formulates a new optimization problem termed DAT to minimize the total communication cost and maximum latency jointly. We first prove the hardness and propose two efficient algorithms, ADAT-C and ADAT, for the special and general cases, respectively. Simulation and experiment results manifest that our algorithms at least outperform 30% of communication cost compared with other heuristics without sacrificing the convergence rate.","PeriodicalId":6802,"journal":{"name":"2021 IEEE Globecom Workshops (GC Wkshps)","volume":"8 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Globecom Workshops (GC Wkshps)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GCWkshps52748.2021.9682059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Data privacy preservation has drawn much attention in emerging machine learning applications, and thus collaborative training paradigms such as Federated Learning (FL) are gaining popularity. However, FL requires a central server to aggregate the local models trained by different users, so the central server may become a crucial network bottleneck and limit scalability. To remedy this issue, a novel Fog Computing (FC)-based FL is presented in which fog nodes train models locally and cooperate to accomplish in-network aggregation, avoiding overwhelming the central server. The paper then formulates a new optimization problem, termed DAT, to jointly minimize the total communication cost and the maximum latency. We first prove the problem's hardness and then propose two efficient algorithms, ADAT-C and ADAT, for the special and general cases, respectively. Simulation and experiment results show that our algorithms reduce communication cost by at least 30% compared with other heuristics without sacrificing the convergence rate.
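Although the abstract does not detail the ADAT and ADAT-C algorithms, the core idea of in-network aggregation can be illustrated with a minimal sketch: each fog node aggregates its clients' local models first, so the central server receives one partial model per fog node rather than one upload per client. Everything below (the toy Model type, weighted_average, fog_round) is a hypothetical illustration assuming FedAvg-style weighted averaging, not the paper's actual method.

```python
# Hypothetical sketch of two-level in-network aggregation for FL.
# The paper's ADAT/ADAT-C algorithms (which choose the aggregation
# topology to minimize communication cost and latency) are NOT shown.
from typing import Dict, List, Tuple

Model = Dict[str, float]  # toy stand-in for a model's parameter vector


def weighted_average(models: List[Model], weights: List[int]) -> Model:
    """FedAvg-style aggregation: parameters averaged by sample count."""
    total = sum(weights)
    return {k: sum(m[k] * w for m, w in zip(models, weights)) / total
            for k in models[0]}


def fog_round(clusters: List[List[Tuple[Model, int]]]) -> Model:
    """Each fog node aggregates its own clients' models in-network;
    the central server then aggregates only the fog-level partials."""
    fog_models, fog_weights = [], []
    for cluster in clusters:  # one fog node per cluster
        models = [m for m, _ in cluster]
        weights = [n for _, n in cluster]
        fog_models.append(weighted_average(models, weights))
        fog_weights.append(sum(weights))
    # Central server handles len(clusters) uploads instead of one per client.
    return weighted_average(fog_models, fog_weights)


if __name__ == "__main__":
    # Three clients behind two fog nodes; entries are (local_model, n_samples).
    clusters = [
        [({"w": 1.0}, 10), ({"w": 3.0}, 10)],
        [({"w": 2.0}, 20)],
    ]
    print(fog_round(clusters))  # {'w': 2.0} -- matches flat FedAvg over all clients
```

Because the weighted averages compose, the two-level result equals flat FedAvg over all clients; the savings come entirely from reducing the number of uploads the central server must handle.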