{"title":"基于深度 Q-LSTM 模型的云网络工作调度,实现高效资源利用","authors":"Yanli Xing","doi":"10.1007/s10723-024-09746-6","DOIUrl":null,"url":null,"abstract":"<p>Edge computing has emerged as an innovative paradigm, bringing cloud service resources closer to mobile consumers at the network's edge. This proximity enables efficient processing of computationally demanding and time-sensitive tasks. However, the dynamic nature of the edge network, characterized by a high density of devices, diverse mobile usage patterns, a wide range of applications, and sporadic traffic, often leads to uneven resource distribution. This imbalance hampers system efficiency and contributes to task failures. To overcome these challenges, we propose a novel approach known as the DRL-LSTM approach, which combines Deep Reinforcement Learning (DRL) with Long Short-Term Memory (LSTM) architecture. The primary objective of the DRL-LSTM approach is to optimize workload planning in edge computing environments. Leveraging the capabilities of DRL, this approach effectively handles complex and multidimensional workload planning problems. By incorporating LSTM as a recurrent neural network, it captures and models temporal dependencies in sequential data, enabling efficient workload management, reduced service time, and enhanced task completion rates. Additionally, the DRL-LSTM approach integrates Deep-Q-Network (DQN) algorithms to address the complexity and high dimensionality of workload scheduling problems. Through simulations, we demonstrate that the DRL-LSTM approach outperforms alternative approaches regarding service time, virtual machine (VM) utilization, and the rate of failed tasks. The integration of DRL and LSTM enables the process to effectively tackle the challenges associated with workload planning in edge computing, leading to improved system performance. The proposed DRL-LSTM approach offers a promising solution for optimizing workload planning in edge computing environments. Combining the power of Deep Reinforcement Learning, Long Short-Term Memory architecture, and Deep-Q-Network algorithms facilitates efficient resource allocation, reduces service time, and increases task completion rates. It holds significant potential for enhancing the overall performance and effectiveness of edge computing systems.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"29 1","pages":""},"PeriodicalIF":3.6000,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Work Scheduling in Cloud Network Based on Deep Q-LSTM Models for Efficient Resource Utilization\",\"authors\":\"Yanli Xing\",\"doi\":\"10.1007/s10723-024-09746-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Edge computing has emerged as an innovative paradigm, bringing cloud service resources closer to mobile consumers at the network's edge. This proximity enables efficient processing of computationally demanding and time-sensitive tasks. However, the dynamic nature of the edge network, characterized by a high density of devices, diverse mobile usage patterns, a wide range of applications, and sporadic traffic, often leads to uneven resource distribution. This imbalance hampers system efficiency and contributes to task failures. To overcome these challenges, we propose a novel approach known as the DRL-LSTM approach, which combines Deep Reinforcement Learning (DRL) with Long Short-Term Memory (LSTM) architecture. 
The primary objective of the DRL-LSTM approach is to optimize workload planning in edge computing environments. Leveraging the capabilities of DRL, this approach effectively handles complex and multidimensional workload planning problems. By incorporating LSTM as a recurrent neural network, it captures and models temporal dependencies in sequential data, enabling efficient workload management, reduced service time, and enhanced task completion rates. Additionally, the DRL-LSTM approach integrates Deep-Q-Network (DQN) algorithms to address the complexity and high dimensionality of workload scheduling problems. Through simulations, we demonstrate that the DRL-LSTM approach outperforms alternative approaches regarding service time, virtual machine (VM) utilization, and the rate of failed tasks. The integration of DRL and LSTM enables the process to effectively tackle the challenges associated with workload planning in edge computing, leading to improved system performance. The proposed DRL-LSTM approach offers a promising solution for optimizing workload planning in edge computing environments. Combining the power of Deep Reinforcement Learning, Long Short-Term Memory architecture, and Deep-Q-Network algorithms facilitates efficient resource allocation, reduces service time, and increases task completion rates. It holds significant potential for enhancing the overall performance and effectiveness of edge computing systems.</p>\",\"PeriodicalId\":54817,\"journal\":{\"name\":\"Journal of Grid Computing\",\"volume\":\"29 1\",\"pages\":\"\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-02-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Grid Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10723-024-09746-6\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Grid Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10723-024-09746-6","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Work Scheduling in Cloud Network Based on Deep Q-LSTM Models for Efficient Resource Utilization
Edge computing has emerged as an innovative paradigm that brings cloud service resources closer to mobile consumers at the network's edge, enabling efficient processing of computationally demanding and time-sensitive tasks. However, the dynamic nature of the edge network, characterized by a high density of devices, diverse mobile usage patterns, a wide range of applications, and sporadic traffic, often leads to uneven resource distribution; this imbalance hampers system efficiency and contributes to task failures. To overcome these challenges, we propose DRL-LSTM, an approach that combines Deep Reinforcement Learning (DRL) with a Long Short-Term Memory (LSTM) architecture to optimize workload planning in edge computing environments. DRL allows the approach to handle complex, multidimensional workload planning problems, while the LSTM, as a recurrent neural network, captures and models temporal dependencies in sequential workload data, enabling efficient workload management, reduced service time, and higher task completion rates. The approach further integrates Deep Q-Network (DQN) algorithms to cope with the complexity and high dimensionality of workload scheduling. Simulations show that DRL-LSTM outperforms alternative approaches in service time, virtual machine (VM) utilization, and the rate of failed tasks. By combining DRL, LSTM, and DQN, the proposed approach facilitates efficient resource allocation and offers a promising solution for improving the overall performance and effectiveness of edge computing systems.
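The abstract describes the method only at a high level, and the paper's implementation is not reproduced here. As a rough illustration, the minimal PyTorch sketch below shows one way a Deep Q-Network with an LSTM state encoder could be wired for VM scheduling: the state is a sliding window of workload observations, actions index candidate VMs, and the reward would encode service time and task success. All names (`DeepQLSTM`, `select_action`, `td_update`), sizes, and the dummy batch are assumptions made for illustration, not the authors' code.

```python
# Hypothetical sketch, not the paper's implementation: a DQN whose state
# encoder is an LSTM over a sliding window of workload observations, with
# one Q-value per candidate VM.
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class DeepQLSTM(nn.Module):
    def __init__(self, obs_dim: int, num_vms: int, hidden: int = 64):
        super().__init__()
        # LSTM captures temporal dependencies in the recent workload trace.
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        # Head maps the final hidden state to one Q-value per VM (action).
        self.head = nn.Linear(hidden, num_vms)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, window, obs_dim) -> Q-values: (batch, num_vms)
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1])


def select_action(net: DeepQLSTM, seq: torch.Tensor, eps: float, num_vms: int) -> int:
    # Epsilon-greedy: explore with probability eps, otherwise pick the
    # VM with the highest predicted Q-value for the observed window.
    if random.random() < eps:
        return random.randrange(num_vms)
    with torch.no_grad():
        return int(net(seq.unsqueeze(0)).argmax(dim=1).item())


def td_update(net, target_net, opt, batch, gamma: float = 0.99) -> float:
    # One DQN step: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a').
    seqs, actions, rewards, next_seqs, dones = batch
    q = net(seqs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_seqs).max(dim=1).values
        target = rewards + gamma * q_next * (1.0 - dones)
    loss = F.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)


if __name__ == "__main__":
    obs_dim, num_vms, window = 6, 4, 10  # illustrative sizes only
    net = DeepQLSTM(obs_dim, num_vms)
    target_net = DeepQLSTM(obs_dim, num_vms)
    target_net.load_state_dict(net.state_dict())
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    # Dummy transition batch standing in for replay-buffer samples.
    batch = (torch.randn(8, window, obs_dim),
             torch.randint(num_vms, (8,)),
             torch.randn(8),
             torch.randn(8, window, obs_dim),
             torch.zeros(8))
    print("action:", select_action(net, torch.randn(window, obs_dim), 0.1, num_vms))
    print("loss:", td_update(net, target_net, opt, batch))
```

In a full scheduler, the dummy batch would be sampled from an experience-replay buffer and the target network would be synchronized with the online network at fixed intervals, as in standard DQN training.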
About the Journal:
Grid Computing is an emerging technology that enables large-scale resource sharing and coordinated problem solving within distributed, often loosely coordinated groups, sometimes termed "virtual organizations." By providing scalable, secure, high-performance mechanisms for discovering and negotiating access to remote resources, Grid technologies promise to make it possible for scientific collaborations to share resources on an unprecedented scale, and for geographically distributed groups to work together in ways that were previously impossible. Similar technologies are being adopted within industry, where they serve as important building blocks for emerging service-provider infrastructures.
Although the advantages of this technology for certain classes of applications have been acknowledged, broadening the applicability and scope of the current body of knowledge requires research in a variety of disciplines: not only multiple domains of computer science (networking, middleware, programming, algorithms) but also the application disciplines themselves, as well as areas such as sociology and economics.