{"title":"Reinforcement learning-based load balancing for heavy traffic Internet of Things","authors":"Jianjun Lei, Jie Liu","doi":"10.1016/j.pmcj.2024.101891","DOIUrl":null,"url":null,"abstract":"<div><p>Aiming to large-scale data transmission requirements of resource-constrained IoT (Internet of Things) devices, the routing protocol for low power lossy network (RPL) is expected to handle the load imbalance and high energy consumption in heavy traffic scenarios. This paper proposes a novel <strong>R</strong>PL routing optimization <strong>A</strong>lgorithm based on deep <strong>R</strong>einforcement <strong>L</strong>earning (referred to as RARL), which employs the centralized training and decentralized execution architecture. Hence, the RARL can provide the intelligent parent selection policy for all nodes while improving the training efficiency of deep reinforcement learning (DRL) model. Furthermore, we integrate a new local observation into the RARL by exploiting multiple routing metrics and design a comprehensive reward function for enhancing the load-balance and energy efficiency. Meanwhile, we also optimize the Trickle timer mechanism for adaptively controlling the delivery of DIO messages, which further improves the interaction efficiency with environment of DRL model. Extensive simulation experiments are conducted to evaluate the effectiveness of RARL under various scenarios. Compared with some existing methods, the simulation results demonstrate the significant performance of RARL in terms of network lifetime, queue loss ratio, and packet reception ratio.</p></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pervasive and Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1574119224000178","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
Aiming at the large-scale data transmission requirements of resource-constrained IoT (Internet of Things) devices, the routing protocol for low-power and lossy networks (RPL) is expected to handle load imbalance and high energy consumption in heavy traffic scenarios. This paper proposes a novel RPL routing optimization Algorithm based on deep Reinforcement Learning (referred to as RARL), which employs a centralized training and decentralized execution architecture. Hence, RARL can provide an intelligent parent-selection policy for all nodes while improving the training efficiency of the deep reinforcement learning (DRL) model. Furthermore, we integrate a new local observation into RARL by exploiting multiple routing metrics and design a comprehensive reward function to enhance load balance and energy efficiency. Meanwhile, we also optimize the Trickle timer mechanism to adaptively control the delivery of DIO messages, which further improves the DRL model's efficiency of interaction with the environment. Extensive simulation experiments are conducted to evaluate the effectiveness of RARL under various scenarios. Compared with existing methods, the simulation results demonstrate the superior performance of RARL in terms of network lifetime, queue loss ratio, and packet reception ratio.
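The abstract describes DRL-driven parent selection with a local observation built from multiple routing metrics and a composite reward aimed at load balance and energy efficiency. The Python sketch below is only an illustrative rendering of that idea, not the paper's implementation: the observation fields (queue occupancy, residual energy, ETX, hop count), the reward weights, and the epsilon-greedy q_value stand-in are all assumptions, since the abstract does not specify the actual observation or reward design.

```python
from dataclasses import dataclass
import random

@dataclass
class ParentObservation:
    # Hypothetical local metrics a node might observe per candidate parent;
    # the paper's exact observation design is not given in the abstract.
    queue_occupancy: float   # 0..1, fraction of the parent's packet queue in use
    residual_energy: float   # 0..1, normalized remaining battery
    etx: float               # expected transmission count of the link
    hop_count: int           # candidate parent's distance to the DODAG root

def reward(obs: ParentObservation,
           w_load: float = 0.4, w_energy: float = 0.3,
           w_link: float = 0.2, w_hops: float = 0.1) -> float:
    """Composite reward favoring load balance and energy efficiency.

    The weights are illustrative placeholders, not values from the paper.
    Lower queue occupancy, higher residual energy, lower ETX, and fewer
    hops all increase the reward.
    """
    return (w_load * (1.0 - obs.queue_occupancy)
            + w_energy * obs.residual_energy
            + w_link * (1.0 / max(obs.etx, 1.0))
            + w_hops * (1.0 / (1 + obs.hop_count)))

def select_parent(candidates: dict, q_value, epsilon: float = 0.1) -> str:
    """Epsilon-greedy parent selection over candidate parents.

    `q_value(parent_id, obs)` stands in for the trained DRL model that,
    in the centralized-training / decentralized-execution architecture,
    would be trained at a central point and executed locally on each node.
    """
    if random.random() < epsilon:
        return random.choice(list(candidates))
    return max(candidates, key=lambda p: q_value(p, candidates[p]))

if __name__ == "__main__":
    # Example: use the hand-written reward as a stand-in for the learned Q-function.
    neighbors = {
        "A": ParentObservation(0.8, 0.5, 1.2, 2),
        "B": ParentObservation(0.3, 0.9, 1.5, 3),
    }
    print(select_parent(neighbors, lambda p, o: reward(o), epsilon=0.0))
```

At execution time, the learned value function would replace the hand-written reward used here as a stand-in; the adaptive Trickle timer mentioned in the abstract would additionally modulate how often DIO messages, and thus fresh observations, are exchanged.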
Journal Introduction:
As envisioned by Mark Weiser as early as 1991, pervasive computing systems and services have truly become integral parts of our daily lives. Tremendous developments in a multitude of technologies ranging from personalized and embedded smart devices (e.g., smartphones, sensors, wearables, IoT devices, etc.) to ubiquitous connectivity, via a variety of wireless mobile communications and cognitive networking infrastructures, to advanced computing techniques (including edge, fog and cloud) and user-friendly middleware services and platforms have significantly contributed to the unprecedented advances in pervasive and mobile computing. Cutting-edge applications and paradigms have evolved, such as cyber-physical systems and smart environments (e.g., smart city, smart energy, smart transportation, smart healthcare, etc.) that also involve humans in the loop through social interactions and participatory and/or mobile crowd sensing, for example. The goal of pervasive computing systems is to improve human experience and quality of life, without explicit awareness of the underlying communications and computing technologies.
The Pervasive and Mobile Computing Journal (PMC) is a high-impact, peer-reviewed technical journal that publishes high-quality scientific articles spanning theory and practice, and covering all aspects of pervasive and mobile computing and systems.