Computation Offloading in Energy Harvesting Systems via Continuous Deep Reinforcement Learning

Jing Zhang, Jun Du, Chunxiao Jiang, Yuan Shen, Jian Wang

ICC 2020 - 2020 IEEE International Conference on Communications (ICC), June 2020. DOI: 10.1109/ICC40277.2020.9148938
As a promising technology for improving the computation experience of mobile devices, mobile edge computing (MEC) is emerging as a paradigm for meeting tremendously increasing computation demands. This paper considers an MEC system consisting of an edge server and multiple mobile devices with energy harvesting capability. Specifically, each device decides its offloading ratio and local computation capacity, both of which take continuous values. Each device also maintains a task load queue and harvests energy, which increases the system dynamics and makes the optimal offloading decision time-dependent. To minimize the long-term sum cost of execution time and energy consumption, we develop a continuous-control deep reinforcement learning algorithm for computation offloading. Using the actor-critic learning approach, we propose a centrally learned policy for each device. By incorporating the states of other devices into the centralized learning process, the proposed method learns to coordinate all devices. Simulation results validate the effectiveness of the proposed algorithm, which demonstrates superior generalization ability and outperforms deep reinforcement learning methods based on discrete decisions.
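The abstract does not give implementation details, but the combination it describes (continuous per-device actions, actor-critic learning, and a centralized learner that observes the states of all devices) resembles a DDPG/MADDPG-style setup. Below is a minimal PyTorch sketch of that structure; the state contents, network sizes, and the two-dimensional action (offloading ratio and normalized local CPU frequency, both in [0, 1]) are illustrative assumptions, not the authors' exact design.

```python
# Sketch of a continuous-action actor-critic for computation offloading,
# under assumed dimensions: each device observes a local state (e.g.,
# queue length, battery level, channel gain, task size) and outputs a
# continuous action; a centralized critic scores the joint state-action
# so devices can learn to coordinate. Illustrative only.
import torch
import torch.nn as nn

N_DEVICES = 3   # number of mobile devices (assumed)
STATE_DIM = 4   # per-device state: queue, battery, channel, task size (assumed)
ACTION_DIM = 2  # [offloading ratio, normalized local CPU frequency]

class Actor(nn.Module):
    """Per-device policy: local state -> continuous action in [0, 1]^2."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
            nn.Sigmoid(),  # keeps both action components bounded in [0, 1]
        )

    def forward(self, state):
        return self.net(state)

class CentralizedCritic(nn.Module):
    """Q-function over the joint state and joint action of all devices."""
    def __init__(self):
        super().__init__()
        joint_dim = N_DEVICES * (STATE_DIM + ACTION_DIM)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, joint_state, joint_action):
        return self.net(torch.cat([joint_state, joint_action], dim=-1))

# One forward pass: each device acts on its own state; the critic scores
# the joint configuration. During training, the reward would be the
# negative sum cost of execution time and energy consumption.
states = torch.rand(N_DEVICES, STATE_DIM)
actors = [Actor() for _ in range(N_DEVICES)]
critic = CentralizedCritic()
actions = torch.stack([actors[i](states[i]) for i in range(N_DEVICES)])
q_value = critic(states.flatten(), actions.flatten())
```

The Sigmoid output layer is one natural way to keep the offloading ratio and the normalized computation capacity in [0, 1]; training would then follow the standard actor-critic loop, updating the critic toward a temporal-difference target and the actors along the critic's gradient.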