{"title":"支持 MEC 的蜂窝物联网网络中基于深度强化学习的移动性管理","authors":"Homayun Kabir , Mau-Luen Tham , Yoong Choon Chang , Chee-Onn Chow","doi":"10.1016/j.pmcj.2024.101987","DOIUrl":null,"url":null,"abstract":"<div><div>Mobile Edge Computing (MEC) has paved the way for new Cellular Internet of Things (CIoT) paradigm, where resource constrained CIoT Devices (CDs) can offload tasks to a computing server located at either a Base Station (BS) or an edge node. For CDs moving in high speed, seamless mobility is crucial during the MEC service migration from one base station (BS) to another. In this paper, we investigate the problem of joint power allocation and Handover (HO) management in a MEC network with a Deep Reinforcement Learning (DRL) approach. To handle the hybrid action space (continuous: power allocation and discrete: HO decision), we leverage Parameterized Deep Q-Network (P-DQN) to learn the near-optimal solution. Simulation results illustrate that the proposed algorithm (P-DQN) outperforms the conventional approaches, such as the nearest BS +random power and random BS +random power, in terms of reward, HO cost, and total power consumption. According to simulation results, HO occurs almost in the edge point of two BS, which means the HO is almost perfectly managed. In addition, the total power consumption is around 0.151 watts in P-DQN while it is about 0.75 watts in nearest BS +random power and random BS +random power.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep reinforcement learning based mobility management in a MEC-Enabled cellular IoT network\",\"authors\":\"Homayun Kabir , Mau-Luen Tham , Yoong Choon Chang , Chee-Onn Chow\",\"doi\":\"10.1016/j.pmcj.2024.101987\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Mobile Edge Computing (MEC) has paved the way for new Cellular Internet of Things (CIoT) paradigm, where resource constrained CIoT Devices (CDs) can offload tasks to a computing server located at either a Base Station (BS) or an edge node. For CDs moving in high speed, seamless mobility is crucial during the MEC service migration from one base station (BS) to another. In this paper, we investigate the problem of joint power allocation and Handover (HO) management in a MEC network with a Deep Reinforcement Learning (DRL) approach. To handle the hybrid action space (continuous: power allocation and discrete: HO decision), we leverage Parameterized Deep Q-Network (P-DQN) to learn the near-optimal solution. Simulation results illustrate that the proposed algorithm (P-DQN) outperforms the conventional approaches, such as the nearest BS +random power and random BS +random power, in terms of reward, HO cost, and total power consumption. According to simulation results, HO occurs almost in the edge point of two BS, which means the HO is almost perfectly managed. 
In addition, the total power consumption is around 0.151 watts in P-DQN while it is about 0.75 watts in nearest BS +random power and random BS +random power.</div></div>\",\"PeriodicalId\":49005,\"journal\":{\"name\":\"Pervasive and Mobile Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pervasive and Mobile Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1574119224001123\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pervasive and Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1574119224001123","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Deep reinforcement learning based mobility management in a MEC-Enabled cellular IoT network
Mobile Edge Computing (MEC) has paved the way for a new Cellular Internet of Things (CIoT) paradigm in which resource-constrained CIoT Devices (CDs) can offload tasks to a computing server located at either a Base Station (BS) or an edge node. For CDs moving at high speed, seamless mobility is crucial as the MEC service migrates from one BS to another. In this paper, we investigate the problem of joint power allocation and Handover (HO) management in a MEC network using a Deep Reinforcement Learning (DRL) approach. To handle the hybrid action space (continuous power allocation and discrete HO decisions), we leverage a Parameterized Deep Q-Network (P-DQN) to learn a near-optimal solution. Simulation results show that the proposed P-DQN algorithm outperforms conventional approaches, such as nearest BS + random power and random BS + random power, in terms of reward, HO cost, and total power consumption. In the simulations, HOs occur almost exactly at the cell edge between two BSs, meaning that HO is managed nearly perfectly. In addition, the total power consumption is around 0.151 W with P-DQN, compared with about 0.75 W for both nearest BS + random power and random BS + random power.
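The key technical idea in the abstract is the hybrid action: at each step the agent jointly picks a discrete BS (the HO decision) and a continuous transmit power. Below is a minimal PyTorch sketch of P-DQN action selection under that structure. The state layout, network sizes, number of candidate BSs, and power bound are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of P-DQN action selection for a hybrid action space:
# discrete BS choice (HO decision) + continuous transmit power per BS.
# All dimensions and bounds below are assumed for illustration only.
import torch
import torch.nn as nn

N_BS = 4        # assumed number of candidate base stations (discrete choices)
STATE_DIM = 8   # assumed state: e.g., device position, velocity, channel gains
P_MAX = 0.2     # assumed transmit-power upper bound in watts

class ParamActor(nn.Module):
    """Maps a state to one continuous power level per candidate BS."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_BS), nn.Sigmoid(),  # outputs in (0, 1)
        )

    def forward(self, state):
        return P_MAX * self.net(state)          # scale powers to (0, P_MAX)

class QNetwork(nn.Module):
    """Scores each (discrete BS choice, its power parameter) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_BS, 64), nn.ReLU(),
            nn.Linear(64, N_BS),                # one Q-value per BS
        )

    def forward(self, state, powers):
        return self.net(torch.cat([state, powers], dim=-1))

def select_action(state, actor, q_net, epsilon=0.1):
    """Hybrid action: argmax over Q for the BS, actor output for the power."""
    with torch.no_grad():
        powers = actor(state)            # continuous parameters, one per BS
        q_values = q_net(state, powers)  # Q(s, k, x_k) for every BS k
    if torch.rand(1).item() < epsilon:   # epsilon-greedy exploration; full
        bs = torch.randint(N_BS, (1,)).item()  # P-DQN also perturbs powers
    else:
        bs = q_values.argmax().item()
    return bs, powers[bs].item()         # (HO decision, transmit power)

if __name__ == "__main__":
    actor, q_net = ParamActor(), QNetwork()
    state = torch.randn(STATE_DIM)
    bs, power = select_action(state, actor, q_net)
    print(f"handover to BS {bs}, transmit at {power:.3f} W")
```

What distinguishes P-DQN from a plain DQN is visible in select_action: the actor first proposes a continuous parameter for every discrete action, and the Q-network then ranks the discrete actions conditioned on those parameters, so both parts of the hybrid action are chosen jointly.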
Journal introduction:
As envisioned by Mark Weiser as early as 1991, pervasive computing systems and services have truly become integral parts of our daily lives. Tremendous developments in a multitude of technologies, ranging from personalized and embedded smart devices (e.g., smartphones, sensors, wearables, and IoT devices) to ubiquitous connectivity via a variety of wireless mobile communications and cognitive networking infrastructures, to advanced computing techniques (including edge, fog and cloud) and user-friendly middleware services and platforms, have significantly contributed to the unprecedented advances in pervasive and mobile computing. Cutting-edge applications and paradigms have evolved, such as cyber-physical systems and smart environments (e.g., smart city, smart energy, smart transportation, smart healthcare) that also involve humans in the loop through social interactions and participatory and/or mobile crowd sensing, for example. The goal of pervasive computing systems is to improve human experience and quality of life, without explicit awareness of the underlying communications and computing technologies.
The Pervasive and Mobile Computing Journal (PMC) is a high-impact, peer-reviewed technical journal that publishes high-quality scientific articles spanning theory and practice, and covering all aspects of pervasive and mobile computing and systems.