Haihui Xie, M. Xia, Peiran Wu, Shuai Wang, Kaibin Huang
{"title":"异步参数共享的分散联邦学习","authors":"Haihui Xie, M. Xia, Peiran Wu, Shuai Wang, Kaibin Huang","doi":"10.1109/ICCCWorkshops57813.2023.10233712","DOIUrl":null,"url":null,"abstract":"Federated learning (FL) enables wireless terminals to collaboratively learn a shared parameter model while keeping all the training data on devices per se. Whatever parameter sharing is applied, the learning model shall adapt to distinct network architectures because an improper learning model will deteriorate learning performance and, even worse, lead to model divergence, especially for the asynchronous transmission in resource-limited distributed networks. To address this issue, this paper proposes a decentralized learning model and develops an asynchronous parameter-sharing algorithm for resource-limited distributed Internet of Things (IoT) networks. It can improve learning efficiency and realize efficient communication. By jointly accounting for the convergence bound of federated learning and the transmission delay of wireless communications, we develop a node scheduling and bandwidth allocation algorithm to improve the learning performance. Extensive simulation results corroborate the effectiveness of the distributed algorithm in terms of fast learning model convergence and low transmission delay.","PeriodicalId":201450,"journal":{"name":"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Decentralized Federated Learning With Asynchronous Parameter Sharing\",\"authors\":\"Haihui Xie, M. 
Xia, Peiran Wu, Shuai Wang, Kaibin Huang\",\"doi\":\"10.1109/ICCCWorkshops57813.2023.10233712\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Federated learning (FL) enables wireless terminals to collaboratively learn a shared parameter model while keeping all the training data on devices per se. Whatever parameter sharing is applied, the learning model shall adapt to distinct network architectures because an improper learning model will deteriorate learning performance and, even worse, lead to model divergence, especially for the asynchronous transmission in resource-limited distributed networks. To address this issue, this paper proposes a decentralized learning model and develops an asynchronous parameter-sharing algorithm for resource-limited distributed Internet of Things (IoT) networks. It can improve learning efficiency and realize efficient communication. By jointly accounting for the convergence bound of federated learning and the transmission delay of wireless communications, we develop a node scheduling and bandwidth allocation algorithm to improve the learning performance. 
Extensive simulation results corroborate the effectiveness of the distributed algorithm in terms of fast learning model convergence and low transmission delay.\",\"PeriodicalId\":201450,\"journal\":{\"name\":\"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCWorkshops57813.2023.10233712\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCWorkshops57813.2023.10233712","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Decentralized Federated Learning With Asynchronous Parameter Sharing
Federated learning (FL) enables wireless terminals to collaboratively learn a shared parameter model while keeping all training data on the devices themselves. Regardless of which parameter-sharing scheme is applied, the learning model must be matched to the underlying network architecture, because an ill-suited model degrades learning performance and, worse, can cause model divergence, particularly under asynchronous transmission in resource-limited distributed networks. To address this issue, this paper proposes a decentralized learning model and develops an asynchronous parameter-sharing algorithm for resource-limited distributed Internet of Things (IoT) networks, improving learning efficiency and enabling efficient communication. By jointly accounting for the convergence bound of federated learning and the transmission delay of wireless communications, we develop a node scheduling and bandwidth allocation algorithm that improves learning performance. Extensive simulation results corroborate the effectiveness of the distributed algorithm in terms of fast learning-model convergence and low transmission delay.
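To make the core idea concrete, the following is a minimal toy sketch of decentralized federated learning with asynchronous parameter sharing: nodes on a ring update one at a time, and each mixes its local model with the last *published* (possibly stale) copy of a neighbor's parameters rather than with the neighbor's live model. All names, the ring topology, the quadratic local objectives, and the mixing weight are illustrative assumptions for exposition; this is not the paper's algorithm or its scheduling policy.

```python
import numpy as np

# Toy decentralized FL with asynchronous (stale) parameter sharing.
# Illustrative only: topology, objectives, and constants are assumptions.

rng = np.random.default_rng(0)
num_nodes, dim = 8, 4
rounds, lr, mix = 400, 0.1, 0.5   # update steps, learning rate, mixing weight

# Node i locally minimizes f_i(w) = 0.5 * ||w - t_i||^2; the consensus
# optimum of the average objective is the mean of the targets t_i.
targets = rng.normal(size=(num_nodes, dim))
optimum = targets.mean(axis=0)

params = np.zeros((num_nodes, dim))  # live model at each node
stale = np.zeros((num_nodes, dim))   # last parameters each node published

for step in range(rounds):
    i = step % num_nodes             # one node updates at a time (asynchrony)
    j = (i + 1) % num_nodes          # its ring neighbor
    # Local gradient step on node i's own objective.
    grad = params[i] - targets[i]
    params[i] -= lr * grad
    # Mix with the neighbor's stale published copy, not its live model.
    params[i] = (1 - mix) * params[i] + mix * stale[j]
    # Publish node i's new parameters for neighbors to read later.
    stale[i] = params[i].copy()

gap = np.linalg.norm(params.mean(axis=0) - optimum)
print(f"distance of averaged model to consensus optimum: {gap:.4f}")
```

The staleness in `stale[j]` is what distinguishes this from synchronous averaging: a node never waits for fresh neighbor parameters, which is the communication pattern the abstract targets for resource-limited IoT networks.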