{"title":"一种基于q -学习的云雾计算任务前瞻性调度的有效方法","authors":"Yan Jin","doi":"10.1016/j.engappai.2025.110705","DOIUrl":null,"url":null,"abstract":"<div><div>The increasing energy consumption in cloud computing data centers has become a significant concern due to the expanding scale of computational demands. Efficient task scheduling is crucial to optimizing resource utilization while reducing operational costs and energy consumption. This study proposes a <strong>M</strong>ulti-<strong>A</strong>gent <strong>R</strong>einforcement <strong>L</strong>earning (MARL)-based scheduling framework that enhances system efficiency by dynamically allocating tasks based on environmental variations and workload fluctuations. Unlike conventional methods, MARL allows multiple intelligent agents to collaboratively optimize scheduling decisions, leading to superior adaptability and performance. The proposed approach consists of two steps: first, a centralized task dispatcher assigns incoming tasks to cloud servers using a queuing model. Second, an MARL-based scheduler on each server prioritizes and allocates tasks to virtual machines while continuously updating scheduling policies to maximize efficiency. The framework is evaluated using a CloudSim-based simulation environment to ensure a realistic and controlled assessment. Experimental results demonstrate that the proposed method reduces energy consumption by an average of 51.34 %, improves CPU utilization efficiency, and decreases response time by 44.35 % compared to traditional scheduling techniques, including First In-First Out (FIFO), Greedy, and Queue-based Scheduling (Q-sch). By leveraging MARL, the scheduler effectively minimizes waiting times and optimizes task completion rates, ensuring a balance between energy efficiency and system performance. This work highlights the advantages of reinforcement learning in cloud-fog computing and underscores its potential for intelligent resource management.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"151 ","pages":"Article 110705"},"PeriodicalIF":8.0000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An effective method for prospective scheduling of tasks in cloud-fog computing with an energy consumption management approach based on Q-learning\",\"authors\":\"Yan Jin\",\"doi\":\"10.1016/j.engappai.2025.110705\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The increasing energy consumption in cloud computing data centers has become a significant concern due to the expanding scale of computational demands. Efficient task scheduling is crucial to optimizing resource utilization while reducing operational costs and energy consumption. This study proposes a <strong>M</strong>ulti-<strong>A</strong>gent <strong>R</strong>einforcement <strong>L</strong>earning (MARL)-based scheduling framework that enhances system efficiency by dynamically allocating tasks based on environmental variations and workload fluctuations. Unlike conventional methods, MARL allows multiple intelligent agents to collaboratively optimize scheduling decisions, leading to superior adaptability and performance. The proposed approach consists of two steps: first, a centralized task dispatcher assigns incoming tasks to cloud servers using a queuing model. 
Second, an MARL-based scheduler on each server prioritizes and allocates tasks to virtual machines while continuously updating scheduling policies to maximize efficiency. The framework is evaluated using a CloudSim-based simulation environment to ensure a realistic and controlled assessment. Experimental results demonstrate that the proposed method reduces energy consumption by an average of 51.34 %, improves CPU utilization efficiency, and decreases response time by 44.35 % compared to traditional scheduling techniques, including First In-First Out (FIFO), Greedy, and Queue-based Scheduling (Q-sch). By leveraging MARL, the scheduler effectively minimizes waiting times and optimizes task completion rates, ensuring a balance between energy efficiency and system performance. This work highlights the advantages of reinforcement learning in cloud-fog computing and underscores its potential for intelligent resource management.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"151 \",\"pages\":\"Article 110705\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625007055\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625007055","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
An effective method for prospective scheduling of tasks in cloud-fog computing with an energy consumption management approach based on Q-learning
The increasing energy consumption of cloud computing data centers has become a significant concern as the scale of computational demand expands. Efficient task scheduling is crucial for optimizing resource utilization while reducing operational costs and energy consumption. This study proposes a Multi-Agent Reinforcement Learning (MARL)-based scheduling framework that enhances system efficiency by dynamically allocating tasks in response to environmental variations and workload fluctuations. Unlike conventional methods, MARL allows multiple intelligent agents to collaboratively optimize scheduling decisions, leading to superior adaptability and performance. The proposed approach consists of two steps. First, a centralized task dispatcher assigns incoming tasks to cloud servers using a queuing model; second, a MARL-based scheduler on each server prioritizes and allocates tasks to virtual machines while continuously updating its scheduling policy to maximize efficiency. The framework is evaluated in a CloudSim-based simulation environment to ensure a realistic and controlled assessment. Experimental results demonstrate that, compared with traditional scheduling techniques, including First-In-First-Out (FIFO), Greedy, and Queue-based Scheduling (Q-sch), the proposed method reduces energy consumption by an average of 51.34 %, improves CPU utilization efficiency, and decreases response time by 44.35 %. By leveraging MARL, the scheduler effectively minimizes waiting times and optimizes task completion rates, ensuring a balance between energy efficiency and system performance. This work highlights the advantages of reinforcement learning in cloud-fog computing and underscores its potential for intelligent resource management.
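The abstract does not include an implementation, but the second step, a per-server scheduler that learns which virtual machine should receive each task, maps naturally onto tabular Q-learning. The sketch below (Python) is a minimal illustration under assumed definitions: the state is the discretized queue length of each VM, the action is the index of the VM chosen for the incoming task, and the reward is the negative of the resulting queue backlog as a stand-in for the combined energy-and-delay cost. These choices, and the names QLearningScheduler and queue_state, are hypothetical placeholders for illustration, not the authors' formulation.

```python
import random
from collections import defaultdict

class QLearningScheduler:
    """Minimal tabular Q-learning task scheduler (illustrative sketch only).

    State  : tuple of discretized queue lengths, one entry per virtual machine.
    Action : index of the VM the incoming task is assigned to.
    Reward : negative backlog of the chosen VM, assumed here as a proxy for
             the energy-plus-waiting-time cost (the paper may define it differently).
    """

    def __init__(self, num_vms, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.num_vms = num_vms
        self.alpha = alpha        # learning rate
        self.gamma = gamma        # discount factor
        self.epsilon = epsilon    # exploration probability
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value

    def choose_vm(self, state):
        """Epsilon-greedy selection of a target VM for the incoming task."""
        if random.random() < self.epsilon:
            return random.randrange(self.num_vms)
        return max(range(self.num_vms), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard Q-learning temporal-difference update."""
        best_next = max(self.q[(next_state, a)] for a in range(self.num_vms))
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def queue_state(queues, bucket=5):
    """Discretize per-VM queue lengths so the state space stays small."""
    return tuple(min(q // bucket, 4) for q in queues)


# Toy usage: schedule a synthetic stream of tasks onto 4 VMs.
if __name__ == "__main__":
    scheduler = QLearningScheduler(num_vms=4)
    queues = [0, 0, 0, 0]                 # pending tasks per VM
    for _ in range(200):                  # synthetic task arrivals
        state = scheduler_state = queue_state(queues)
        vm = scheduler.choose_vm(state)
        queues[vm] += 1
        reward = -queues[vm]              # assumed cost: longer queue, higher cost
        queues = [max(0, q - 1) for q in queues]  # each VM finishes one task per step
        scheduler.update(state, vm, reward, queue_state(queues))
```

A full multi-agent setup in the spirit of the paper would run one such learner per server, with the queuing-model dispatcher from the first step sitting upstream and deciding which server's learner sees each incoming task; how the agents coordinate (e.g., through shared load or energy signals) is not specified in the abstract.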
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, with remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI applied in real-world engineering settings, validated on publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.