Dynamic eMBB scheduling strategy for GBR and NGBR in Non Standalone 5G NR: A deep reinforcement learning approach
H. Eddine Benmadani, M. Amine Ouamri, M. Azni, T. Essa Alharbi
Computer Networks, vol. 272, Article 111692 (published 2025-09-12). DOI: 10.1016/j.comnet.2025.111692
Abstract
With the emergence of 5G networks and the network slicing concept, efficient resource management is crucial to meet varied Quality of Service (QoS) requirements. Intra-slice scheduling plays a central role in optimizing network performance while catering to the requirements of different traffic flows within a slice. In this paper, we propose a Deep Reinforcement Learning (DRL)-based scheduling scheme for enhanced Mobile Broadband (eMBB) applications. This approach aims to maximize system throughput, increase Guaranteed Bit Rate (GBR) throughput, minimize packet loss by minimizing Head of Line (HoL) delay, and ensure fairness among non-GBR (NGBR) flows. To evaluate our approach, we test and contrast two DRL methods, Deep Q-Network (DQN) and Proximal Policy Optimization (PPO). Using both methods to fine-tune our hybrid scheduling metric, we demonstrate the adaptability and reliability of our approach across different learning frameworks. We compare the performance of our DRL-based scheduler against the Proportional Fair (PF) scheduler and two QoS-aware schedulers, QoS and EXP-PF. Simulation results show that our scheme significantly improves system throughput while keeping GBR and NGBR traffic performance in balance. Moreover, the comparison of DQN and PPO provides novel insights into wireless scheduling efficacy and a foundation for future adaptive scheduling solutions in 5G and beyond.
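For readers unfamiliar with how a DRL-tuned hybrid metric could be wired into an intra-slice scheduler, the sketch below is one plausible formulation, not the authors' actual method: a PF term blended with an HoL-delay term, where the weights `alpha` and `beta` stand in for the action a trained DQN/PPO policy would emit each TTI. All names, weights, and the greedy resource-block allocation loop are illustrative assumptions, since the abstract does not specify the metric or the agent's state/action space.

```python
import numpy as np

# Minimal, illustrative per-TTI scheduler sketch. The paper's exact hybrid
# metric and DRL state/action/reward design are not given in the abstract;
# here we ASSUME the metric is a weighted mix of a Proportional Fair (PF)
# term and a Head-of-Line (HoL) delay term, with the weights (alpha, beta)
# being the quantities a DQN/PPO agent would tune.

def hybrid_metric(inst_rate, avg_thr, hol_delay, delay_budget,
                  is_gbr, alpha, beta):
    """Per-user scheduling priority (higher = scheduled first)."""
    pf_term = inst_rate / max(avg_thr, 1e-6)          # classic PF ratio
    delay_term = hol_delay / max(delay_budget, 1e-3)  # urgency of HoL packet
    # GBR flows weight the delay term fully; NGBR flows rely mainly on the
    # PF term for fairness. alpha/beta are the (assumed) DRL-tuned knobs.
    return alpha * pf_term + beta * delay_term * (1.0 if is_gbr else 0.5)

def schedule_tti(users, n_rbs, alpha, beta):
    """Greedy RB-by-RB allocation using the hybrid metric (illustrative)."""
    allocation = {u["id"]: 0 for u in users}
    for _ in range(n_rbs):
        scores = [hybrid_metric(u["inst_rate"], u["avg_thr"], u["hol_delay"],
                                u["delay_budget"], u["is_gbr"], alpha, beta)
                  for u in users]
        winner = int(np.argmax(scores))
        allocation[users[winner]["id"]] += 1
        # crude running-throughput update so one user does not take every RB
        users[winner]["avg_thr"] += users[winner]["inst_rate"] / n_rbs
    return allocation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    users = [{"id": i,
              "inst_rate": float(rng.uniform(1, 10)),   # toy per-RB rate
              "avg_thr": float(rng.uniform(1, 5)),
              "hol_delay": float(rng.uniform(0, 50)),   # ms
              "delay_budget": 100.0,                    # ms
              "is_gbr": i < 2}                          # first two users GBR
             for i in range(4)]
    # alpha, beta stand in for the action a trained DQN/PPO policy would emit.
    print(schedule_tti(users, n_rbs=10, alpha=1.0, beta=0.8))
```

Setting beta = 0 in this sketch recovers a plain PF scheduler; the EXP-PF baseline mentioned above instead multiplies the PF ratio by an exponential of the normalized HoL delay, which prioritizes delay-critical flows more aggressively.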
Journal overview:
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.