{"title":"Efficient Wireless Network Slicing in 5G Networks: An Asynchronous Federated Learning Approach","authors":"K. Letaief, Z. Fadlullah, M. Fouda","doi":"10.1109/IoTaIS56727.2022.9976007","DOIUrl":null,"url":null,"abstract":"While researchers continue to incorporate intelligent algorithms in Fifth Generation (5G) and beyond networks to achieve high-accuracy decisions with ultra-low latency and significantly high throughput, the issue of privacy-preservation became a critical research area. This is because mobile service providers not only need to satisfy the Quality of Service (QoS) of users in terms of ultra-fast user connectivity but also ensure reliable, automated solutions that will enable them to design a vast multi-tenant system on the same physical infrastructure while preserving the user privacy. With the adoption of data-driven machine learning models for providing smart network slicing in 5G and beyond networks and Internet of Things (IoT) systems, the issue of privacy-preservation integration is yet to be considered. We address this issue in this paper, and design an asynchronously weight updating federated learning framework that is efficient, reliable, and preserves the privacy as well as achieve the required low latency and low network overhead. Thus, our proposal permits a reasonably accurate decision for the resource allocation for different 5G users without violating their privacy or introducing additional load to the network. Experimental results demonstrate the efficiency of the asynchronously weight updating federated learning in contrast with the conventional FedAvg (Federated averaging) strategy and the traditional centralized learning model. 
In particular, our proposed technique achieves network overhead reduction with a consistent and significantly high prediction accuracy, that validates its low-latency and efficiency advantages.","PeriodicalId":138894,"journal":{"name":"2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IoTaIS56727.2022.9976007","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
As researchers continue to incorporate intelligent algorithms into Fifth Generation (5G) and beyond networks to achieve high-accuracy decisions with ultra-low latency and high throughput, privacy preservation has become a critical research area. Mobile service providers must not only satisfy users' Quality of Service (QoS) requirements in terms of ultra-fast connectivity, but also provide reliable, automated solutions that let them build a vast multi-tenant system on the same physical infrastructure while preserving user privacy. Although data-driven machine learning models are being adopted to provide smart network slicing in 5G and beyond networks and Internet of Things (IoT) systems, the integration of privacy preservation has yet to be considered. We address this issue in this paper and design an asynchronously weight-updating federated learning framework that is efficient and reliable, preserves privacy, and achieves the required low latency and low network overhead. Our proposal thus permits reasonably accurate resource-allocation decisions for different 5G users without violating their privacy or introducing additional load on the network. Experimental results demonstrate the efficiency of asynchronously weight-updating federated learning in contrast with the conventional FedAvg (Federated Averaging) strategy and the traditional centralized learning model. In particular, our proposed technique reduces network overhead while maintaining consistently high prediction accuracy, which validates its low-latency and efficiency advantages.
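The abstract contrasts asynchronous weight updating with the conventional FedAvg baseline but does not give the paper's exact update rule. The minimal sketch below is therefore an illustration of the generic difference, not the authors' algorithm: FedAvg waits for a round of clients and averages their weights by data size, while a common asynchronous pattern blends each client's weights into the global model as soon as they arrive, down-weighting stale contributions. The staleness-decay factor `alpha = base_lr / (1 + staleness)` is an assumed, illustrative choice.

```python
# Hedged sketch: generic FedAvg vs. an asynchronous, staleness-weighted
# update. The paper's actual update rule is not given in the abstract;
# function names and the staleness decay are illustrative assumptions.

def fedavg(global_w, client_ws, client_sizes):
    """Synchronous FedAvg: wait for all clients in the round, then
    average their weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    return [sum(w[i] * n for w, n in zip(client_ws, client_sizes)) / total
            for i in range(len(global_w))]

def async_update(global_w, client_w, staleness, base_lr=0.5):
    """Asynchronous update: merge a single client's weights into the
    global model immediately on arrival. Older (staler) updates get a
    smaller mixing coefficient, so the server never blocks on slow
    clients -- the source of the latency/overhead savings claimed."""
    alpha = base_lr / (1 + staleness)  # assumed staleness decay
    return [(1 - alpha) * g + alpha * c for g, c in zip(global_w, client_w)]
```

Usage: with two clients holding equal-sized datasets and weights `[1.0]` and `[3.0]`, `fedavg([0.0], [[1.0], [3.0]], [1, 1])` yields `[2.0]`; `async_update([0.0], [1.0], staleness=0)` mixes half of the fresh client weight in, yielding `[0.5]`.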