Wenhao Fan;Xiongfei Chun;Zhiyu Fan;Ruimin Zhang;Siyang Liu;Yuan'an Liu
Title: Dual-Agent DRL-Based Service Placement, Task Scheduling, and Resource Allocation for Multi-Sensor and Multi-User Edge Computing Networks
DOI: 10.1109/TNSE.2025.3560402
Journal: IEEE Transactions on Network Science and Engineering, vol. 12, no. 5, pp. 3416-3433 (JCR Q1, Engineering, Multidisciplinary; Impact Factor 7.9)
Published: 2025-04-14 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10964200/
Citations: 0
Abstract
Multi-sensor and multi-user edge computing networks can support various data-intensive Internet of Things (IoT) applications, which exhibit a task-data-decoupled pattern. In this scenario, tasks generated by users can be scheduled to edge servers (ESs), which compute the task results and return them to the users. Meanwhile, a large number of sensors collect and upload data to the ESs to meet the requirements of task processing. However, existing works mainly consider the task-data-coupled pattern, overlooking the cost of the sensor data collection process. Therefore, we propose a joint optimization problem involving service placement, task scheduling, and resource allocation to minimize the total system cost, defined as the weighted sum of the delay and energy consumption of each user and sensor. We jointly optimize service placement, user task scheduling, transmit power allocation for sensors and users, computing resource allocation for both the ESs and the cloud server (CS), and transmission rate allocation for ES-ES and ES-CS connections. Considering the differences in the update frequencies of the optimization variables, we propose a dual-agent Deep Reinforcement Learning (DRL) algorithm that uses two SD3 (Softmax Deep Double Deterministic Policy Gradients)-based DRL agents to make service placement and task scheduling decisions asynchronously, while embedding two optimization subroutines that solve for the optimal transmit power, computing resource, and transmission rate allocations using numerical methods. The complexity and convergence of the algorithm are analyzed, and extensive experiments are conducted in 8 different scenarios, demonstrating the superiority of our scheme compared with three other reference schemes.
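The objective described above (a weighted sum of per-user and per-sensor delay and energy) can be sketched as follows; the symbols here are illustrative and not necessarily the paper's own notation:

```latex
C_{\mathrm{total}}
  = \sum_{u \in \mathcal{U}} \left( \omega_u^{T}\, T_u + \omega_u^{E}\, E_u \right)
  + \sum_{s \in \mathcal{S}} \left( \omega_s^{T}\, T_s + \omega_s^{E}\, E_s \right)
```

where $\mathcal{U}$ and $\mathcal{S}$ denote the user and sensor sets, $T_u, E_u$ ($T_s, E_s$) are the delay and energy consumption of user $u$ (sensor $s$), and the $\omega$ terms are the weighting coefficients that trade delay against energy. The joint optimization then minimizes $C_{\mathrm{total}}$ over the placement, scheduling, and allocation variables listed in the abstract.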
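The SD3 agents mentioned in the abstract replace the hard max over next-state action values with a softmax (Boltzmann) operator when forming the critic's learning target, which softens TD3-style overestimation correction. A minimal NumPy sketch of that softmax value estimate (function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def softmax_value(q_values, beta=1.0):
    """Softmax value estimate as used in SD3:
    V = sum_a softmax(beta * Q(a)) * Q(a), taken over a set of
    candidate actions sampled near the target policy.
    beta -> infinity recovers the hard max; beta = 0 gives the mean."""
    q = np.asarray(q_values, dtype=float)
    z = beta * q
    z -= z.max()                       # shift for numerical stability
    w = np.exp(z) / np.exp(z).sum()    # softmax weights over actions
    return float((w * q).sum())

# Q-values of candidate actions sampled around the target policy.
qs = [1.0, 2.0, 4.0]
v_soft = softmax_value(qs, beta=5.0)   # biased toward max(qs)
v_mean = softmax_value(qs, beta=0.0)   # uniform weights: the mean
```

The resulting value feeds the critic target `y = r + gamma * V(s')`; tuning `beta` interpolates between the optimistic max operator and a conservative average.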
Journal Introduction:
The IEEE Transactions on Network Science and Engineering (TNSE) is committed to the timely publication of peer-reviewed technical articles on the theory and applications of network science and the interconnections among the elements of a system that form a network. In particular, TNSE publishes articles on the understanding, prediction, and control of the structures and behaviors of networks at the fundamental level. The types of networks covered include physical or engineered networks, information networks, biological networks, semantic networks, economic networks, social networks, and ecological networks. The journal aims to discover common principles that govern network structures, functionalities, and behaviors. Another trans-disciplinary focus of TNSE is the interactions between, and co-evolution of, different genres of networks.