{"title":"A Request Scheduling Optimization Mechanism Based on Deep Q-Learning in Edge Computing Environments","authors":"Yaqiang Zhang, Rengang Li, Yaqian Zhao, Ruyang Li","doi":"10.1109/INFOCOMWKSHPS51825.2021.9484512","DOIUrl":null,"url":null,"abstract":"While there have been many explorations about the offloading and scheduling of atomic user requests, the incoming requests with task-dependency, which can be represented as Directed Acyclic Graphs (DAG), are rarely investigated in recent works. In this paper, an online-based concurrent request scheduling mechanism is proposed, where the user requests are split into a set of tasks and are assigned to different edge servers in terms of their status. To optimize the requests scheduling policy in each time slot for minimizing the long term average system delay, we model it as an Markov Decision Process (MDP). Further, a Deep Reinforcement Learning (DRL)-based mechanism is applied to promote the scheduling policy and make decision in each step. Extensive experiments are conducted, and evaluation results demonstrate that our proposed DRL-based technique can effectively improve the long-term performance of scheduling system, compared with the baseline mechanism.","PeriodicalId":109588,"journal":{"name":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOMWKSHPS51825.2021.9484512","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
While there have been many explorations of the offloading and scheduling of atomic user requests, incoming requests with task dependencies, which can be represented as Directed Acyclic Graphs (DAGs), have rarely been investigated in recent works. In this paper, an online concurrent request scheduling mechanism is proposed, in which user requests are split into sets of tasks and assigned to different edge servers according to server status. To optimize the request scheduling policy in each time slot and minimize the long-term average system delay, we model the problem as a Markov Decision Process (MDP). Further, a Deep Reinforcement Learning (DRL)-based mechanism is applied to improve the scheduling policy and make a decision at each step. Extensive experiments are conducted, and the evaluation results demonstrate that our proposed DRL-based technique can effectively improve the long-term performance of the scheduling system compared with the baseline mechanism.
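
To make the Deep Q-Learning formulation concrete, the following is a minimal sketch of a DQN-style scheduling agent, not the authors' exact model. It assumes the state is a flat vector of per-server queue lengths together with features of the currently ready DAG task, the action is the index of the edge server the task is dispatched to, and the reward is the negative incremental system delay; the class and parameter names (QNetwork, DQNScheduler, num_servers, state_dim) are illustrative and do not appear in the paper.

# Illustrative DQN scheduler sketch (assumptions as stated above).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per edge server (action)."""
    def __init__(self, state_dim: int, num_servers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_servers),
        )

    def forward(self, state):
        return self.net(state)


class DQNScheduler:
    """Epsilon-greedy DQN agent that picks an edge server for each ready task."""
    def __init__(self, state_dim, num_servers, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, num_servers)
        self.target_q = QNetwork(state_dim, num_servers)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # experience replay memory
        self.gamma, self.eps, self.num_servers = gamma, eps, num_servers

    def select_server(self, state):
        # Explore with probability eps, otherwise act greedily on Q-values.
        if random.random() < self.eps:
            return random.randrange(self.num_servers)
        with torch.no_grad():
            q_values = self.q(torch.as_tensor(state, dtype=torch.float32))
            return int(q_values.argmax())

    def store(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
        # Q(s, a) for the actions actually taken.
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        # Bootstrapped target from the (periodically synced) target network.
        with torch.no_grad():
            target = r + self.gamma * (1 - d) * self.target_q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        self.target_q.load_state_dict(self.q.state_dict())

In use, an environment loop would call select_server for each ready task in the current time slot, observe the resulting delay, store the transition, and invoke train_step (with an occasional sync_target) so the policy gradually reduces the long-term average system delay.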