Adrian Martin;Isabel de-la-Bandera;Adriano Mendo;Jose Outes;Juan Ramiro;Raquel Barco
{"title":"ENDC优化的联邦深度强化学习","authors":"Adrian Martin;Isabel de-la-Bandera;Adriano Mendo;Jose Outes;Juan Ramiro;Raquel Barco","doi":"10.1109/TMC.2025.3534661","DOIUrl":null,"url":null,"abstract":"5G New Radio (NR) network deployment in Non-Stand Alone (NSA) mode means that 5G networks rely on the control plane of existing Long Term Evolution (LTE) modules for control functions, while 5G modules are only dedicated to the user plane tasks, which could also be carried out by LTE modules simultaneously. The first deployments of 5G networks are essentially using this technology. These deployments enable what is known as E-UTRAN NR Dual Connectivity (ENDC), where a user establish a 5G connection simultaneously with a pre-existing LTE connection to boost their data rate. In this paper, a single Federated Deep Reinforcement Learning (FDRL) agent for the optimization of the event that triggers the dual connectivity between LTE and 5G is proposed. First, single Deep Reinforcement Learning (DRL) agents are trained in isolated cells. Later, these agents are merged into a unique global agent capable of optimizing the whole network with Federated Learning (FL). This scheme of training single agents and merging them also makes feasible the use of dynamic simulators for this type of learning algorithm and parameters related to mobility, by drastically reducing the number of possible combinations resulting in fewer simulations. The simulation results show that the final agent is capable of achieving a tradeoff between dropped calls and the user throughput to achieve global optimum without the need for interacting with all the cells for training.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 6","pages":"5525-5535"},"PeriodicalIF":7.7000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10854899","citationCount":"0","resultStr":"{\"title\":\"Federated Deep Reinforcement Learning for ENDC Optimization\",\"authors\":\"Adrian Martin;Isabel de-la-Bandera;Adriano Mendo;Jose Outes;Juan Ramiro;Raquel Barco\",\"doi\":\"10.1109/TMC.2025.3534661\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"5G New Radio (NR) network deployment in Non-Stand Alone (NSA) mode means that 5G networks rely on the control plane of existing Long Term Evolution (LTE) modules for control functions, while 5G modules are only dedicated to the user plane tasks, which could also be carried out by LTE modules simultaneously. The first deployments of 5G networks are essentially using this technology. These deployments enable what is known as E-UTRAN NR Dual Connectivity (ENDC), where a user establish a 5G connection simultaneously with a pre-existing LTE connection to boost their data rate. In this paper, a single Federated Deep Reinforcement Learning (FDRL) agent for the optimization of the event that triggers the dual connectivity between LTE and 5G is proposed. First, single Deep Reinforcement Learning (DRL) agents are trained in isolated cells. Later, these agents are merged into a unique global agent capable of optimizing the whole network with Federated Learning (FL). This scheme of training single agents and merging them also makes feasible the use of dynamic simulators for this type of learning algorithm and parameters related to mobility, by drastically reducing the number of possible combinations resulting in fewer simulations. 
The simulation results show that the final agent is capable of achieving a tradeoff between dropped calls and the user throughput to achieve global optimum without the need for interacting with all the cells for training.\",\"PeriodicalId\":50389,\"journal\":{\"name\":\"IEEE Transactions on Mobile Computing\",\"volume\":\"24 6\",\"pages\":\"5525-5535\"},\"PeriodicalIF\":7.7000,\"publicationDate\":\"2025-01-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10854899\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Mobile Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10854899/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10854899/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
5G New Radio (NR) deployment in Non-Stand-Alone (NSA) mode means that 5G networks rely on the control plane of existing Long Term Evolution (LTE) modules for control functions, while 5G modules are dedicated solely to user-plane tasks, which LTE modules can also carry out simultaneously. The first 5G network deployments essentially use this technology. These deployments enable E-UTRAN NR Dual Connectivity (ENDC), in which a user establishes a 5G connection alongside a pre-existing LTE connection to boost the data rate. This paper proposes a single Federated Deep Reinforcement Learning (FDRL) agent to optimize the event that triggers dual connectivity between LTE and 5G. First, individual Deep Reinforcement Learning (DRL) agents are trained in isolated cells. These agents are then merged, via Federated Learning (FL), into a single global agent capable of optimizing the whole network. This scheme of training individual agents and then merging them also makes dynamic simulators practical for this type of learning algorithm and for mobility-related parameters, since it drastically reduces the number of possible parameter combinations and hence the number of required simulations. Simulation results show that the final agent achieves a tradeoff between dropped calls and user throughput, reaching a global optimum without needing to interact with every cell during training.
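The two-stage scheme the abstract describes (train one DRL agent per cell in isolation, then merge the agents' network weights into a single global agent with federated learning) can be sketched in a few lines. The following is a minimal illustration only, assuming a FedAvg-style uniform parameter average over small Q-networks; the names QNetwork and fedavg, the KPI/threshold dimensions, and the uniform weighting are illustrative assumptions, not details taken from the paper:

```python
# Minimal sketch of the two-stage scheme from the abstract:
# per-cell DRL agents are trained in isolation, then their
# weights are merged into one global agent via FedAvg-style
# averaging. All names and dimensions are hypothetical.
import copy
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small Q-network: cell KPIs in, one Q-value per candidate
    ENDC trigger threshold out (assumed action space)."""
    def __init__(self, n_kpis: int = 4, n_thresholds: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_kpis, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_thresholds),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fedavg(local_agents: list[QNetwork]) -> QNetwork:
    """Merge independently trained per-cell agents into one
    global agent by averaging parameters (uniform weights here
    for simplicity; the paper's actual FL weighting may differ)."""
    global_agent = copy.deepcopy(local_agents[0])
    global_state = global_agent.state_dict()
    for name in global_state:
        global_state[name] = torch.stack(
            [agent.state_dict()[name].float() for agent in local_agents]
        ).mean(dim=0)
    global_agent.load_state_dict(global_state)
    return global_agent

# Stage 1: one DRL agent per simulated cell (training loop omitted;
# each agent would interact only with its own cell's simulator).
local_agents = [QNetwork() for _cell in range(10)]

# Stage 2: a single federated merge yields the network-wide agent.
global_agent = fedavg(local_agents)
```

The key property this sketch captures is that the merge step needs only the local agents' parameters, not further interaction with every cell, which is what lets the scheme cut the number of simulator runs.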
About the journal:
IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.