A Latency-Aware Power-efficient Reinforcement Learning Approach for Task Offloading in Multi-Access Edge Networks

Alireza Aghasi, R. Rituraj
2022 IEEE 5th International Conference and Workshop Óbuda on Electrical and Power Engineering (CANDO-EPE), 2022-11-21
DOI: 10.1109/CANDO-EPE57516.2022.10046357
Because some cloud resources are deployed as edge servers close to mobile devices, those devices can offload a portion of their tasks to these servers, accelerating task execution and meeting the growing computing demands of mobile applications. Various approaches have been proposed for making such offloading decisions. In this paper we present a Reinforcement Learning (RL) approach that accounts for delayed feedback from the environment, a more realistic setting than the one assumed by conventional RL methods. Simulation results show that the proposed method handles the environment's random delayed feedback properly and significantly outperforms conventional RL methods.
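The paper's exact algorithm is not given in the abstract, but the core idea, an RL agent whose rewards arrive after a random delay, can be illustrated with a minimal sketch. The toy cost model, the "light"/"heavy" task states, the delay range, and all hyperparameters below are illustrative assumptions, not the authors' design: pending (state, action) pairs are buffered in a priority queue keyed by the step at which their reward becomes observable, and the Q-update is applied only once that reward arrives.

```python
import heapq
import random
from collections import defaultdict

ALPHA, EPS = 0.1, 0.1                   # learning rate, exploration rate
ACTIONS = ("local", "offload")          # run on device vs. on an edge server

def toy_reward(state, action):
    # Illustrative one-step cost model: offloading pays off for heavy tasks,
    # while network overhead makes it a poor choice for light ones.
    heavy = state == "heavy"
    if action == "offload":
        return 1.0 if heavy else -0.2
    return 1.0 if not heavy else -0.5   # heavy tasks drain the battery locally

def train(steps=3000, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)              # Q[(state, action)]
    pending = []                        # min-heap of (due_step, state, action, reward)
    for t in range(steps):
        state = rng.choice(("light", "heavy"))
        if rng.random() < EPS:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        # The reward is observed only after a random 1-5 step delay.
        delay = rng.randint(1, 5)
        heapq.heappush(pending, (t + delay, state, action, toy_reward(state, action)))
        # Apply every update whose delayed reward has arrived by step t.
        while pending and pending[0][0] <= t:
            _, s, a, r = heapq.heappop(pending)
            q[(s, a)] += ALPHA * (r - q[(s, a)])  # one-step (bandit-style) update
    return q

q = train()
```

After training, the learned values recover the intended policy under this toy model: `q[("heavy", "offload")]` exceeds `q[("heavy", "local")]`, and the reverse holds for light tasks. The buffering step is what distinguishes this from a conventional agent, which would assume each reward is available immediately after the action is taken.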