{"title":"基于雾的光网络实时物联网智能动态带宽分配","authors":"Mohannad Alhafnawi , Ashraf Abu-Ein , Haythem Bany Salameh , Yaser Jararweh , Obaida Al-Hazaimeh","doi":"10.1016/j.simpat.2025.103126","DOIUrl":null,"url":null,"abstract":"<div><div>The rapid proliferation of Internet of Things (IoT) applications has led to an exponential increase in data traffic, which is imposing a significant burden on existing network infrastructures. Ethernet Passive Optical Networks (EPONs) within fog computing environments are critical for supporting real-time IoT applications because of their high bandwidth and low-latency capabilities. However, traditional static bandwidth allocation methods designed for EPONs do not accommodate the dynamic and heterogeneous nature of real-time IoT traffic, leading to inefficient resource utilization and service degradation. This paper proposes a reinforcement learning (RL)-driven fog-based dynamic bandwidth allocation (DBA) framework to optimize resource management in EPONs with multi-optical edge devices (OEDs) within cloud-fog computing environments. The framework improves bandwidth distribution for OEDs connected to IoT access points through a fog server, ensuring adaptive real-time dynamic bandwidth allocation. Specifically, this paper formulates the bandwidth allocation problem as a constrained profit-maximization discounted return problem, which cannot be directly solved using traditional optimization methods due to the dynamic and uncertain nature of the IoT traffic. To effectively solve this problem, this paper models the bandwidth allocation problem as a Markov Decision Process (MDP), which can be solved using RL without requiring prior traffic knowledge. Through a continuous real-time learning process, the proposed algorithm learns the IoT traffic dynamics and then adaptively optimizes bandwidth allocation at the fog layer, resulting in improved system efficiency and adaptability. Simulation results demonstrate that, compared to conventional bandwidth allocation algorithms designed for fog-based EPONs (i.e., equal distribution and moving average algorithms), the proposed RL-based framework significantly improves bandwidth utilization and reduces blocking probability related to IoT networks by dynamically adjusting bandwidth allocation based on the learned real-time traffic.</div></div>","PeriodicalId":49518,"journal":{"name":"Simulation Modelling Practice and Theory","volume":"142 ","pages":"Article 103126"},"PeriodicalIF":3.5000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intelligent dynamic bandwidth allocation for real-time IoT in fog-based optical networks\",\"authors\":\"Mohannad Alhafnawi , Ashraf Abu-Ein , Haythem Bany Salameh , Yaser Jararweh , Obaida Al-Hazaimeh\",\"doi\":\"10.1016/j.simpat.2025.103126\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The rapid proliferation of Internet of Things (IoT) applications has led to an exponential increase in data traffic, which is imposing a significant burden on existing network infrastructures. Ethernet Passive Optical Networks (EPONs) within fog computing environments are critical for supporting real-time IoT applications because of their high bandwidth and low-latency capabilities. 
However, traditional static bandwidth allocation methods designed for EPONs do not accommodate the dynamic and heterogeneous nature of real-time IoT traffic, leading to inefficient resource utilization and service degradation. This paper proposes a reinforcement learning (RL)-driven fog-based dynamic bandwidth allocation (DBA) framework to optimize resource management in EPONs with multi-optical edge devices (OEDs) within cloud-fog computing environments. The framework improves bandwidth distribution for OEDs connected to IoT access points through a fog server, ensuring adaptive real-time dynamic bandwidth allocation. Specifically, this paper formulates the bandwidth allocation problem as a constrained profit-maximization discounted return problem, which cannot be directly solved using traditional optimization methods due to the dynamic and uncertain nature of the IoT traffic. To effectively solve this problem, this paper models the bandwidth allocation problem as a Markov Decision Process (MDP), which can be solved using RL without requiring prior traffic knowledge. Through a continuous real-time learning process, the proposed algorithm learns the IoT traffic dynamics and then adaptively optimizes bandwidth allocation at the fog layer, resulting in improved system efficiency and adaptability. Simulation results demonstrate that, compared to conventional bandwidth allocation algorithms designed for fog-based EPONs (i.e., equal distribution and moving average algorithms), the proposed RL-based framework significantly improves bandwidth utilization and reduces blocking probability related to IoT networks by dynamically adjusting bandwidth allocation based on the learned real-time traffic.</div></div>\",\"PeriodicalId\":49518,\"journal\":{\"name\":\"Simulation Modelling Practice and Theory\",\"volume\":\"142 \",\"pages\":\"Article 103126\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-04-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Simulation Modelling Practice and Theory\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1569190X25000619\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Simulation Modelling Practice and Theory","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1569190X25000619","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Intelligent dynamic bandwidth allocation for real-time IoT in fog-based optical networks
The rapid proliferation of Internet of Things (IoT) applications has led to an exponential increase in data traffic, which imposes a significant burden on existing network infrastructures. Ethernet Passive Optical Networks (EPONs) within fog computing environments are critical for supporting real-time IoT applications because of their high bandwidth and low-latency capabilities. However, traditional static bandwidth allocation methods designed for EPONs do not accommodate the dynamic and heterogeneous nature of real-time IoT traffic, leading to inefficient resource utilization and service degradation. This paper proposes a reinforcement learning (RL)-driven fog-based dynamic bandwidth allocation (DBA) framework to optimize resource management in EPONs with multiple optical edge devices (OEDs) within cloud-fog computing environments. The framework improves bandwidth distribution for OEDs connected to IoT access points through a fog server, ensuring adaptive, real-time dynamic bandwidth allocation. Specifically, this paper formulates the bandwidth allocation problem as a constrained profit-maximization discounted-return problem, which cannot be directly solved using traditional optimization methods because of the dynamic and uncertain nature of IoT traffic. To solve this problem effectively, the paper models bandwidth allocation as a Markov Decision Process (MDP), which can be solved using RL without requiring prior traffic knowledge. Through a continuous real-time learning process, the proposed algorithm learns the IoT traffic dynamics and then adaptively optimizes bandwidth allocation at the fog layer, improving system efficiency and adaptability. Simulation results demonstrate that, compared with conventional bandwidth allocation algorithms designed for fog-based EPONs (i.e., equal-distribution and moving-average algorithms), the proposed RL-based framework significantly improves bandwidth utilization and reduces the blocking probability of IoT traffic by dynamically adjusting bandwidth allocation based on the learned real-time traffic.
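To make the MDP/RL formulation in the abstract more concrete: in a generic discounted-return MDP, the agent seeks a policy that maximizes the expected return E[Σ_t γ^t r_t], where the reward r_t would here reflect served bandwidth minus a penalty for blocked (unserved) IoT demand. The sketch below is a minimal, self-contained tabular Q-learning loop that splits a fixed upstream capacity among a few OEDs based on discretized demand. It is not the authors' algorithm: the traffic model, state/action discretization, reward shape, and all parameter values are illustrative assumptions only.

```python
# Minimal, illustrative sketch of an RL-driven dynamic bandwidth allocation (DBA) loop.
# NOT the paper's actual method: the traffic model, reward, and discretization are assumptions.

import random
from collections import defaultdict

NUM_OEDS = 3               # optical edge devices sharing the upstream capacity (assumed)
TOTAL_SLOTS = 10           # total bandwidth units per allocation cycle (assumed)
TRAFFIC_LEVELS = 4         # discretized per-OED demand levels 0..3 (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

def enumerate_actions(slots=TOTAL_SLOTS, devices=NUM_OEDS, step=2):
    """All ways to split `slots` among `devices` in coarse steps of `step` units."""
    if devices == 1:
        return [(slots,)]
    actions = []
    for share in range(0, slots + 1, step):
        for rest in enumerate_actions(slots - share, devices - 1, step):
            actions.append((share,) + rest)
    return actions

ACTIONS = enumerate_actions()
Q = defaultdict(float)     # tabular Q-values keyed by (state, action_index)

def observe_traffic():
    """Assumed toy traffic model: each OED reports an i.i.d. discretized demand level."""
    return tuple(random.randint(0, TRAFFIC_LEVELS - 1) for _ in range(NUM_OEDS))

def reward(demand, allocation):
    """Reward served bandwidth, penalize unserved demand (a proxy for blocking)."""
    served = sum(min(d, a) for d, a in zip(demand, allocation))
    blocked = sum(max(d - a, 0) for d, a in zip(demand, allocation))
    return served - 2.0 * blocked

def choose_action(state):
    """Epsilon-greedy action selection over the discrete allocation set."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])

state = observe_traffic()
for _ in range(50_000):
    a = choose_action(state)
    r = reward(state, ACTIONS[a])
    next_state = observe_traffic()
    best_next = max(Q[(next_state, i)] for i in range(len(ACTIONS)))
    # Standard Q-learning update toward the discounted-return target.
    Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
    state = next_state

best = max(range(len(ACTIONS)), key=lambda i: Q[((3, 1, 0), i)])
print("Learned allocation for demand (3, 1, 0):", ACTIONS[best])
```

The epsilon-greedy policy and penalty-based reward mirror the general idea of learning traffic dynamics online and trading off utilization against blocking; a practical fog-layer DBA would of course replace the toy traffic generator with measured per-OED queue reports and a richer state space.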
Journal description:
The journal Simulation Modelling Practice and Theory provides a forum for original, high-quality papers dealing with any aspect of systems simulation and modelling.
The journal aims to be a reference and a powerful tool for all those professionally active in, or interested in, the methods and applications of simulation. Submitted papers will be peer reviewed and must significantly contribute to modelling and simulation in general or use modelling and simulation in application areas.
Paper submission is solicited on:
• theoretical aspects of modelling and simulation including formal modelling, model-checking, random number generators, sensitivity analysis, variance reduction techniques, experimental design, meta-modelling, methods and algorithms for validation and verification, selection and comparison procedures etc.;
• methodology and application of modelling and simulation in any area, including computer systems, networks, real-time and embedded systems, mobile and intelligent agents, manufacturing and transportation systems, management, engineering, biomedical engineering, economics, ecology and environment, education, transaction handling, etc.;
• simulation languages and environments, including those specific to distributed computing, grid computing, high performance computers or computer networks, etc.;
• distributed and real-time simulation, simulation interoperability;
• tools for high performance computing simulation, including dedicated architectures and parallel computing.