Green Computation Offloading With DRL in Multi-Access Edge Computing
Changkui Yin, Yingchi Mao, Meng Chen, Yi Rong, Yinqiu Liu, Xiaoming He
Transactions on Emerging Telecommunications Technologies, vol. 35, no. 11, published 2024-11-08. DOI: 10.1002/ett.70003
Abstract
In multi-access edge computing (MEC), computational task offloading from mobile terminals (MTs) is expected to enable green applications under energy consumption and service latency constraints. Nevertheless, the diverse statuses of edge servers and mobile terminals, along with fluctuating offloading routes, make computational task offloading challenging. To support green applications, we first present a novel computational task offloading model constrained by energy consumption and service latency: (1) smart mobile terminals with computational capabilities can serve as carriers; (2) the diverse computational and communication capacities of edge servers can enhance the offloading process; (3) the unpredictable routing paths of mobile terminals and edge servers can result in varied information transmissions. We then propose an improved deep reinforcement learning (DRL) algorithm named PS-DDPG, which augments deep deterministic policy gradients (DDPG) with prioritized experience replay (PER) and stochastic weight averaging (SWA) mechanisms, to seek an offloading policy that reduces energy consumption. The PS-DDPG algorithm runs on each MT and is responsible for decisions regarding task partition, channel allocation, and transmission power control. Our approach refines the estimation of observed values and updates memory via write operations. The replay buffer holds data from the preceding time slots to update both the actor and critic networks, after which the buffer is reset. Comprehensive experiments validate the superior performance of our algorithm, including its stability and convergence, compared with prior studies.
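The abstract names the building blocks (DDPG, PER, SWA) without implementation detail. As a rough illustration only, below is a minimal PyTorch sketch of a PS-DDPG-style update step: a DDPG update in which the critic loss is weighted by PER importance weights and an SWA running average of the actor weights is maintained. All network sizes, hyperparameters, and the state/action layout (task split, channel, transmit power) are illustrative assumptions, not the paper's actual design.

```python
# Minimal PS-DDPG-style sketch: DDPG + prioritized experience replay (PER)
# + stochastic weight averaging (SWA). Illustrative assumptions throughout.
import numpy as np
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel

class PERBuffer:
    """Proportional prioritized replay over a fixed-size ring buffer."""
    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-5):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.prios, self.pos = [], np.zeros(capacity, np.float32), 0

    def push(self, transition):
        max_p = self.prios.max() if self.data else 1.0  # new samples get max priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.prios[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        p = self.prios[:len(self.data)].astype(np.float64) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        w = (len(self.data) * p[idx]) ** (-self.beta)  # importance-sampling weights
        w /= w.max()
        return [self.data[i] for i in idx], idx, torch.tensor(w, dtype=torch.float32)

    def update_priorities(self, idx, td_errors):
        self.prios[idx] = np.abs(td_errors) + self.eps

def mlp(inp, out, act=nn.Tanh):
    return nn.Sequential(nn.Linear(inp, 128), nn.ReLU(), nn.Linear(128, out), act())

# Hypothetical dimensions: state = local MT/server status observations;
# action = (task split ratio, channel choice, transmit power), squashed by tanh.
S_DIM, A_DIM, GAMMA, TAU = 8, 3, 0.99, 0.005
actor, critic = mlp(S_DIM, A_DIM), mlp(S_DIM + A_DIM, 1, nn.Identity)
actor_t, critic_t = mlp(S_DIM, A_DIM), mlp(S_DIM + A_DIM, 1, nn.Identity)
actor_t.load_state_dict(actor.state_dict()); critic_t.load_state_dict(critic.state_dict())
swa_actor = AveragedModel(actor)  # SWA: running average of actor weights
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buf = PERBuffer(10000)

def update(batch_size=64):
    batch, idx, w = buf.sample(batch_size)
    s, a, r, s2 = (torch.stack([torch.as_tensor(t[i], dtype=torch.float32)
                                for t in batch]) for i in range(4))
    r = r.unsqueeze(1)
    with torch.no_grad():  # bootstrapped target from the target networks
        q_target = r + GAMMA * critic_t(torch.cat([s2, actor_t(s2)], 1))
    q = critic(torch.cat([s, a], 1))
    td = q_target - q
    critic_loss = (w.unsqueeze(1) * td.pow(2)).mean()  # PER-weighted critic loss
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    actor_loss = -critic(torch.cat([s, actor(s)], 1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    buf.update_priorities(idx, td.detach().squeeze(1).numpy())
    swa_actor.update_parameters(actor)  # fold current weights into the SWA average
    for net, tgt in ((actor, actor_t), (critic, critic_t)):  # soft target update
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

# Smoke test with random transitions (illustrative only).
for _ in range(256):
    buf.push((np.random.randn(S_DIM), np.random.randn(A_DIM),
              np.random.randn(), np.random.randn(S_DIM)))
update()
```

Following standard SWA practice, the averaged policy (`swa_actor` here) rather than the raw actor would serve as the deployed offloading policy after training; the abstract's periodic buffer reset would correspond to clearing `buf` after each block of time slots.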
About the journal:
Transactions on Emerging Telecommunications Technologies (ETT), formerly known as European Transactions on Telecommunications (ETT), has the following aims:
- to attract cutting-edge publications from leading researchers and research groups around the world
- to become a highly cited source of timely research findings in emerging fields of telecommunications
- to limit revision and publication cycles to a few months, thereby significantly increasing the journal's attractiveness as a publication venue
- to become the leading journal for publishing the latest developments in telecommunications