Title: A Comprehensive Reinforcement Learning Framework for Priority-Aware Data Center Scheduling Optimization and QoS-Defined Buffer Management
Authors: Vinu Josephraj, Wilfred Franklin Sundara Raj
DOI: 10.1002/cpe.70028 (https://onlinelibrary.wiley.com/doi/10.1002/cpe.70028)
Journal: Concurrency and Computation: Practice and Experience, vol. 37, no. 4-5 (JCR Q3, Computer Science, Software Engineering; IF 1.5)
Published: 2025-02-19
Citations: 0
Abstract
A network architecture's fundamental components include its buffering designs and the policies that govern their administration. Strong incentives exist to test and deploy new policies, yet there are few opportunities to change much more than minor details. We describe Open Queue, a new specification language for expressing management policies and virtual buffering architectures that represent a broad range of economic models. Open Queue provides comparators and basic functions that make it easy for users to compose whole buffering structures and policies. We give examples of Open Queue buffer management strategies and present empirical evidence of how they affect performance in different scenarios. Across these efforts, the main problems are minimizing network usage, avoiding congestion so as to preserve Quality of Service (QoS), and making the best use of the currently available routes. Common traffic engineering methods such as Equal Cost Multipath (ECMP) neither account for the current state of the network nor distinguish mice flows from elephant flows. To address this, we propose a Deep Reinforcement Learning (DRL) based Priority-Aware Data Center Scheduling algorithm (PADCS), which leverages AHP-TOPSIS for workload categorization, updates from past experience through prioritized experience replay, and incorporates real-time feedback from the network environment. Finally, the algorithm selects the optimal route for every flow according to the types of flows currently in the network, improving QoS and user satisfaction. Evaluation results show that, under various traffic scenarios, DRL-PADCS improves average throughput and normalized total throughput, and lowers link utilization, average round-trip time, and packet loss rate compared with ECMP.
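The prioritized experience replay mentioned in the abstract can be sketched as a proportional-priority buffer, where transitions are sampled with probability proportional to their (exponentiated) TD error. This is a minimal illustrative sketch, not the paper's implementation; the class name, `alpha` exponent, and eviction scheme are our assumptions.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (sketch).

    Transitions are sampled with probability proportional to
    priority**alpha; priorities are refreshed from new TD errors.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.data = []        # stored transitions
        self.priorities = []  # one priority per transition
        self.pos = 0          # next slot to overwrite when full

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps zero-error transitions sampleable.
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            # Ring-buffer overwrite of the oldest slot.
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sampling probability is proportional to stored priority.
        idx = random.choices(range(len(self.data)),
                             weights=self.priorities, k=batch_size)
        return idx, [self.data[i] for i in idx]

    def update(self, indices, td_errors):
        # After a learning step, refresh priorities of sampled items.
        for i, e in zip(indices, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

A DRL scheduler would `add` each (state, action, reward, next-state) transition after acting, `sample` minibatches for training, and `update` priorities with the new TD errors.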
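The AHP-TOPSIS step the abstract refers to ranks candidates (e.g. candidate paths or workload classes) against multiple criteria using AHP-derived weights. A minimal TOPSIS sketch follows; the function name, criteria, and weights are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def topsis_rank(decision_matrix, weights, benefit_mask):
    """Rank alternatives with TOPSIS.

    decision_matrix: rows = alternatives (e.g. candidate paths),
                     cols = criteria (e.g. bandwidth, latency)
    weights:         AHP-derived criterion weights (sum to 1)
    benefit_mask:    True where larger is better (bandwidth),
                     False where smaller is better (latency)
    """
    m = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    weighted = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights)
    mask = np.asarray(benefit_mask)
    # Ideal and anti-ideal points, per criterion direction.
    ideal = np.where(mask, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(mask, weighted.min(axis=0), weighted.max(axis=0))
    # Relative closeness to the ideal solution.
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    order = np.argsort(-closeness)  # best alternative first
    return order, closeness
```

For path selection, the scheduler would evaluate each candidate path on criteria such as residual bandwidth and round-trip time, then pick the path ranked first.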
About the Journal
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.