{"title":"Accelerating federated learning based on grouping aggregation in heterogeneous edge computing","authors":"Longbo Li, C. Li","doi":"10.1145/3529466.3529505","DOIUrl":null,"url":null,"abstract":"Recently, edge devices such as mobile phones and smartwatches have become part of modern distributed systems, federated learning is an effectively distributed learning paradigm that can leverage these edge devices to collaboratively train models without sharing raw data. In federated learning, the device periodically downloads the model from the server, uses the local data for training, and uploads it to the server, while the servers aggregates params uploaded to update the global model. However, different devices are located in different network environments and have different communication and computation capability. Therefore, the model training speed depends on the slowest device, and the system between devices is heterogeneous. To effectively address these problems, we propose to group the devices, firstly use the synchronous method to aggregate model updates within a group, then aggregate updates between groups in an asynchronous way, and propose an algorithm based on weight update to aggregate models. 
We conduct extensive simulations on our proposed algorithms, and the results show that they can dramatically accelerate model training while achieving high accuracy.","PeriodicalId":375562,"journal":{"name":"Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3529466.3529505","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recently, edge devices such as mobile phones and smartwatches have become part of modern distributed systems. Federated learning is an effective distributed learning paradigm that can leverage these edge devices to collaboratively train models without sharing raw data. In federated learning, each device periodically downloads the model from the server, trains it on its local data, and uploads the update to the server, which aggregates the uploaded parameters to update the global model. However, devices sit in different network environments and differ in communication and computation capability, so the system is heterogeneous across devices and the overall training speed is bounded by the slowest device. To address these problems, we propose to group the devices: model updates are first aggregated synchronously within each group, then aggregated asynchronously between groups, using a proposed aggregation algorithm based on weight updates. We conduct extensive simulations of our proposed algorithms, and the results show that they can dramatically accelerate model training while achieving high accuracy.
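The two-level scheme described above can be illustrated with a minimal sketch. The abstract does not give the concrete aggregation rules, so this is only an assumption-laden illustration: intra-group aggregation is sketched as FedAvg-style weighted averaging by local dataset size, and inter-group aggregation as asynchronous mixing whose rate decays with staleness; the function names, the `base_mix` parameter, and the staleness discount are all hypothetical, not taken from the paper.

```python
def intra_group_aggregate(updates, sizes):
    """Synchronous step within one group: weighted average of the clients'
    model updates, weighted by local dataset size (FedAvg-style assumption)."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(s * u[i] for s, u in zip(sizes, updates)) / total
            for i in range(dim)]

def inter_group_aggregate(global_model, group_model, staleness, base_mix=0.5):
    """Asynchronous step between groups: blend an arriving group model into
    the global model. The mixing rate decays with staleness so outdated
    group updates contribute less (this discount rule is our assumption;
    the paper's exact weight-update rule is not given in the abstract)."""
    alpha = base_mix / (1.0 + staleness)
    return [(1.0 - alpha) * g + alpha * m
            for g, m in zip(global_model, group_model)]
```

With this split, fast devices inside a group wait only for their group peers, while slow groups merge into the global model whenever they finish, so training speed is no longer bounded by the single slowest device.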