{"title":"基于sdn的移动边缘计算在线资源分配:强化方法","authors":"Huatong Jiang, Yanjun Li, Meihui Gao","doi":"10.1109/GLOBECOM46510.2021.9685614","DOIUrl":null,"url":null,"abstract":"To meet the real-time requirement of the edge computing applications, technologies of software defined network and network function virtualization are introduced to reconstruct the MEC system. On this basis, we consider the design of online computing and communication resource allocation solution, aiming at maximizing the long-term average rate of successfully processing the real-time tasks. The problem is formulated in a Markov decision process framework. Both Q-learning and deep reinforcement learning algorithms are proposed to obtain online resource allocation solutions with consideration of time-varying channel conditions and task loads. Simulation results show that both proposed algorithms converge quickly and the average real-time task processing success rate achieved by deep reinforcement learning algorithm is the highest among all the baseline algorithms.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Online Resource Allocation for SDN-Based Mobile Edge Computing: Reinforcement Approaches\",\"authors\":\"Huatong Jiang, Yanjun Li, Meihui Gao\",\"doi\":\"10.1109/GLOBECOM46510.2021.9685614\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To meet the real-time requirement of the edge computing applications, technologies of software defined network and network function virtualization are introduced to reconstruct the MEC system. On this basis, we consider the design of online computing and communication resource allocation solution, aiming at maximizing the long-term average rate of successfully processing the real-time tasks. The problem is formulated in a Markov decision process framework. Both Q-learning and deep reinforcement learning algorithms are proposed to obtain online resource allocation solutions with consideration of time-varying channel conditions and task loads. 
Simulation results show that both proposed algorithms converge quickly and the average real-time task processing success rate achieved by deep reinforcement learning algorithm is the highest among all the baseline algorithms.\",\"PeriodicalId\":200641,\"journal\":{\"name\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Global Communications Conference (GLOBECOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBECOM46510.2021.9685614\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Communications Conference (GLOBECOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM46510.2021.9685614","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
To meet the real-time requirements of edge computing applications, software-defined networking (SDN) and network function virtualization (NFV) are introduced to reconstruct the mobile edge computing (MEC) system. On this basis, we design an online computing and communication resource allocation solution that aims to maximize the long-term average rate of successfully processing real-time tasks. The problem is formulated within a Markov decision process framework, and both a Q-learning and a deep reinforcement learning algorithm are proposed to obtain online resource allocation decisions under time-varying channel conditions and task loads. Simulation results show that both proposed algorithms converge quickly, and that the deep reinforcement learning algorithm achieves the highest average real-time task processing success rate among all baseline algorithms.
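To illustrate the kind of tabular reinforcement learning approach the abstract describes, the sketch below shows a minimal Q-learning loop for a toy resource-allocation MDP. It is not the authors' implementation: the discretized state space (channel level, task-load level), the action meaning (compute units assigned), and the success-probability dynamics are all illustrative assumptions standing in for the paper's system model.

```python
# Minimal sketch (not the authors' implementation): tabular Q-learning for a toy
# online resource-allocation MDP. State = (channel level, task-load level),
# action = number of compute units assigned; reward = 1 if the real-time task
# finishes within its deadline, else 0. All dynamics below are assumed.
import numpy as np

rng = np.random.default_rng(0)

N_CHANNEL, N_LOAD, N_ACTIONS = 3, 3, 4        # discretized state/action spaces
EPISODES, STEPS = 500, 50
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # learning rate, discount, exploration

Q = np.zeros((N_CHANNEL, N_LOAD, N_ACTIONS))   # Q-table over (state, action)

def step(channel, load, action):
    """Toy environment: a better channel and more compute units raise the
    probability that the task meets its deadline (assumed dynamics)."""
    p_success = min(1.0, 0.2 + 0.2 * channel + 0.15 * action - 0.1 * load)
    reward = 1.0 if rng.random() < p_success else 0.0
    # Channel quality and task load evolve randomly (time-varying conditions).
    return rng.integers(N_CHANNEL), rng.integers(N_LOAD), reward

for _ in range(EPISODES):
    c, l = rng.integers(N_CHANNEL), rng.integers(N_LOAD)
    for _ in range(STEPS):
        # epsilon-greedy action selection
        a = rng.integers(N_ACTIONS) if rng.random() < EPSILON else int(np.argmax(Q[c, l]))
        c2, l2, r = step(c, l, a)
        # standard Q-learning temporal-difference update
        Q[c, l, a] += ALPHA * (r + GAMMA * np.max(Q[c2, l2]) - Q[c, l, a])
        c, l = c2, l2

print("Greedy allocation per (channel, load) state:\n", np.argmax(Q, axis=2))
```

In this sketch the learned greedy policy maps each (channel, load) state to an allocation level; the paper's deep reinforcement learning variant would replace the Q-table with a neural-network approximator to handle larger or continuous state spaces.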