Entropy-Driven Stochastic Federated Learning in Non-IID 6G Edge-RAN

Brahim Aamer, Hatim Chergui, M. Benjillali, C. Verikoukis
{"title":"非iid 6G Edge-RAN中的熵驱动随机联邦学习","authors":"Brahim Aamer, Hatim Chergui, M. Benjillali, C. Verikoukis","doi":"10.3389/frcmn.2021.739414","DOIUrl":null,"url":null,"abstract":"Scalable and sustainable AI-driven analytics are necessary to enable large-scale and heterogeneous service deployment in sixth-generation (6G) ultra-dense networks. This implies that the exchange of raw monitoring data should be minimized across the network by bringing the analysis functions closer to the data collection points. While federated learning (FL) is an efficient tool to implement such a decentralized strategy, real networks are generally characterized by time- and space-varying traffic patterns and channel conditions, making thereby the data collected in different points non independent and identically distributed (non-IID), which is challenging for FL. To sidestep this issue, we first introduce a new a priori metric that we call dataset entropy, whose role is to capture the distribution, the quantity of information, the unbalanced structure and the “non-IIDness” of a dataset independently of the models. This a priori entropy is calculated using a multi-dimensional spectral clustering scheme over both the features and the supervised output spaces, and is suitable for classification as well as regression tasks. The FL aggregation operations support system (OSS) server then uses the reported dataset entropies to devise 1) an entropy-based federated averaging scheme, and 2) a stochastic participant selection policy to significantly stabilize the training, minimize the convergence time, and reduce the corresponding computation cost. Numerical results are provided to show the superiority of these novel approaches.","PeriodicalId":106247,"journal":{"name":"Frontiers in Communications and Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Entropy-Driven Stochastic Federated Learning in Non-IID 6G Edge-RAN\",\"authors\":\"Brahim Aamer, Hatim Chergui, M. Benjillali, C. Verikoukis\",\"doi\":\"10.3389/frcmn.2021.739414\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Scalable and sustainable AI-driven analytics are necessary to enable large-scale and heterogeneous service deployment in sixth-generation (6G) ultra-dense networks. This implies that the exchange of raw monitoring data should be minimized across the network by bringing the analysis functions closer to the data collection points. While federated learning (FL) is an efficient tool to implement such a decentralized strategy, real networks are generally characterized by time- and space-varying traffic patterns and channel conditions, making thereby the data collected in different points non independent and identically distributed (non-IID), which is challenging for FL. To sidestep this issue, we first introduce a new a priori metric that we call dataset entropy, whose role is to capture the distribution, the quantity of information, the unbalanced structure and the “non-IIDness” of a dataset independently of the models. This a priori entropy is calculated using a multi-dimensional spectral clustering scheme over both the features and the supervised output spaces, and is suitable for classification as well as regression tasks. 
The FL aggregation operations support system (OSS) server then uses the reported dataset entropies to devise 1) an entropy-based federated averaging scheme, and 2) a stochastic participant selection policy to significantly stabilize the training, minimize the convergence time, and reduce the corresponding computation cost. Numerical results are provided to show the superiority of these novel approaches.\",\"PeriodicalId\":106247,\"journal\":{\"name\":\"Frontiers in Communications and Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Communications and Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/frcmn.2021.739414\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Communications and Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frcmn.2021.739414","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Scalable and sustainable AI-driven analytics are necessary to enable large-scale and heterogeneous service deployment in sixth-generation (6G) ultra-dense networks. This implies that the exchange of raw monitoring data should be minimized across the network by bringing the analysis functions closer to the data collection points. While federated learning (FL) is an efficient tool to implement such a decentralized strategy, real networks are generally characterized by time- and space-varying traffic patterns and channel conditions, thereby making the data collected at different points non-independent and identically distributed (non-IID), which is challenging for FL. To sidestep this issue, we first introduce a new a priori metric that we call dataset entropy, whose role is to capture the distribution, the quantity of information, the unbalanced structure, and the “non-IIDness” of a dataset independently of the models. This a priori entropy is calculated using a multi-dimensional spectral clustering scheme over both the feature and the supervised output spaces, and is suitable for classification as well as regression tasks. The FL aggregation operations support system (OSS) server then uses the reported dataset entropies to devise 1) an entropy-based federated averaging scheme, and 2) a stochastic participant selection policy to significantly stabilize the training, minimize the convergence time, and reduce the corresponding computation cost. Numerical results are provided to show the superiority of these novel approaches.
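
The abstract describes the pipeline only at a high level; below is a minimal sketch of how the three ingredients could fit together, assuming scikit-learn's SpectralClustering as the clustering backend and the Shannon entropy of the resulting cluster-size distribution as a stand-in for the paper's multi-dimensional dataset entropy. The function names (dataset_entropy, entropy_weighted_average, stochastic_selection) and the entropy-proportional aggregation weights and sampling probabilities are illustrative assumptions, not the authors' exact formulas.

```python
# Hypothetical sketch of entropy-driven FL aggregation and participant selection.
# Cluster-proportion Shannon entropy stands in for the paper's multi-dimensional
# spectral-clustering-based dataset entropy; weighting/sampling rules are assumed.
import numpy as np
from sklearn.cluster import SpectralClustering

def dataset_entropy(X, y, n_clusters=8, seed=0):
    """Approximate a priori dataset entropy: spectral-cluster the joint
    (features, supervised output) space and take the Shannon entropy of
    the cluster-size distribution (works for classification or regression)."""
    Z = np.hstack([X, np.asarray(y).reshape(len(y), -1)])  # joint feature/output space
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="nearest_neighbors",
                                random_state=seed).fit_predict(Z)
    p = np.bincount(labels, minlength=n_clusters) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())  # entropy in nats

def entropy_weighted_average(client_weights, entropies):
    """Entropy-based federated averaging: clients with higher dataset entropy
    (more diverse / informative local data) receive larger aggregation weights."""
    e = np.asarray(entropies, dtype=float)
    alpha = e / e.sum()  # normalized aggregation weights (assumed proportional to entropy)
    n_layers = len(client_weights[0])
    return [sum(a * w[k] for a, w in zip(alpha, client_weights))
            for k in range(n_layers)]

def stochastic_selection(entropies, n_select, rng=None):
    """Stochastic participant selection: sample clients without replacement,
    with probability proportional to their reported dataset entropy."""
    rng = rng or np.random.default_rng()
    e = np.asarray(entropies, dtype=float)
    return rng.choice(len(e), size=n_select, replace=False, p=e / e.sum())
```

In a FedAvg-style round, the OSS server would first call stochastic_selection to pick the participants, collect their locally trained weights, and combine them with entropy_weighted_average. Under these assumptions, clients with richer and more diverse local datasets contribute more to the global model and are sampled more often, which matches the intuition behind the claimed training stabilization and faster convergence.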