A Novel Hierarchically Decentralized Federated Learning Framework in 6G Wireless Networks

J. Zhang, Li Chen, Xiaohui Chen, Guo Wei
{"title":"6G无线网络中一种新的分层分散联邦学习框架","authors":"J. Zhang, Li Chen, Xiaohui Chen, Guo Wei","doi":"10.1109/INFOCOMWKSHPS57453.2023.10226164","DOIUrl":null,"url":null,"abstract":"Decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server. However, it is difficult to apply in multicell scenarios. In this paper, we propose an integrated hierarchically decentralized federated learning (HDFL) framework, where devices from different cells collaboratively train a global model under periodically intra-cell D2D consensus and inter-cell aggregation. We establish strong convergence guarantees for the proposed HDFL algorithm without assuming convex objectives. The convergence rate of HDFL can be optimized to achieve the balance of model accuracy and communication overhead. To improve the wireless performance of HDFL, we formulate an optimization problem to minimize the training latency and energy overhead. Numerical results based on the CIFAR-10 dataset validate the superiority of HDFL over traditional DFL methods in the multicell scenario.","PeriodicalId":354290,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"26 13","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Novel Hierarchically Decentralized Federated Learning Framework in 6G Wireless Networks\",\"authors\":\"J. Zhang, Li Chen, Xiaohui Chen, Guo Wei\",\"doi\":\"10.1109/INFOCOMWKSHPS57453.2023.10226164\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server. However, it is difficult to apply in multicell scenarios. In this paper, we propose an integrated hierarchically decentralized federated learning (HDFL) framework, where devices from different cells collaboratively train a global model under periodically intra-cell D2D consensus and inter-cell aggregation. We establish strong convergence guarantees for the proposed HDFL algorithm without assuming convex objectives. The convergence rate of HDFL can be optimized to achieve the balance of model accuracy and communication overhead. To improve the wireless performance of HDFL, we formulate an optimization problem to minimize the training latency and energy overhead. 
Numerical results based on the CIFAR-10 dataset validate the superiority of HDFL over traditional DFL methods in the multicell scenario.\",\"PeriodicalId\":354290,\"journal\":{\"name\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"volume\":\"26 13\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10226164\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10226164","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server, but it is difficult to apply in multicell scenarios. In this paper, we propose an integrated hierarchically decentralized federated learning (HDFL) framework, in which devices from different cells collaboratively train a global model through periodic intra-cell D2D consensus and inter-cell aggregation. We establish strong convergence guarantees for the proposed HDFL algorithm without assuming convex objectives. The convergence rate of HDFL can be tuned to balance model accuracy against communication overhead. To improve the wireless performance of HDFL, we formulate an optimization problem that minimizes training latency and energy overhead. Numerical results on the CIFAR-10 dataset validate the superiority of HDFL over traditional DFL methods in the multicell scenario.
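
The abstract describes a two-timescale training loop: frequent intra-cell D2D consensus and less frequent inter-cell aggregation. The sketch below is a minimal, self-contained Python simulation of that structure only, not the paper's algorithm: the quadratic local losses, the ring D2D topology, the uniform inter-cell averaging, and all constants (N_CELLS, TAU1, TAU2, LR) are illustrative assumptions.

```python
import numpy as np

# Minimal two-timescale HDFL-style simulation. Illustrative sketch of the
# structure in the abstract, not the authors' algorithm: the quadratic
# local losses, ring D2D topology, uniform inter-cell averaging, and all
# constants below are assumptions.

N_CELLS, DEVICES_PER_CELL, DIM = 3, 4, 10
TAU1 = 5    # local SGD steps between D2D consensus rounds
TAU2 = 20   # consensus rounds between inter-cell aggregations
LR = 0.05
rng = np.random.default_rng(0)

# Each device holds a private quadratic loss ||w - target||^2 / 2,
# so the global optimum is the mean of all device targets.
targets = rng.normal(size=(N_CELLS, DEVICES_PER_CELL, DIM))
models = np.zeros((N_CELLS, DEVICES_PER_CELL, DIM))

def local_sgd(w, target, steps):
    """Gradient descent on one device's local loss."""
    for _ in range(steps):
        w = w - LR * (w - target)
    return w

def intra_cell_consensus(cell_models):
    """One D2D gossip round on a ring: average with both neighbors."""
    return (np.roll(cell_models, 1, axis=0) + cell_models
            + np.roll(cell_models, -1, axis=0)) / 3.0

for rnd in range(1, 101):
    for c in range(N_CELLS):
        for d in range(DEVICES_PER_CELL):
            models[c, d] = local_sgd(models[c, d], targets[c, d], TAU1)
        models[c] = intra_cell_consensus(models[c])   # fast timescale
    if rnd % TAU2 == 0:                               # slow timescale
        models[:] = models.mean(axis=(0, 1))          # inter-cell aggregation

gap = np.linalg.norm(models[0, 0] - targets.mean(axis=(0, 1)))
print(f"distance of global model from optimum: {gap:.4f}")
```

In this toy setting, TAU1 controls how far devices drift apart between D2D consensus rounds, while TAU2 trades inter-cell communication overhead against how quickly the per-cell models agree, mirroring the accuracy/overhead balance the abstract refers to.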