PEARL: Power and Delay-Aware Learning-based Routing Policy for IoT Applications

Impact Factor: 0.5 · Q4 · Computer Science, Software Engineering
Sahar Rezagholi Lalani, Bardia Safaei, A. H. Hosseini Monazzah, A. Ejlali
{"title":"PEARL: Power and Delay-Aware Learning-based Routing Policy for IoT Applications","authors":"Sahar Rezagholi Lalani, Bardia Safaei, A. H. Hosseini Monazzah, A. Ejlali","doi":"10.1109/rtest56034.2022.9849862","DOIUrl":null,"url":null,"abstract":"Routing between the IoT nodes has been considered an important challenge, due to its impact on different link/node metrics, including power consumption, reliability, and latency. Due to the low-power and lossy nature of IoT environments, the amount of consumed power, and the ratio of delivered packets plays an important role in the overall performance of the system. Meanwhile, in some IoT applications, e.g., remote health-care monitoring systems, other factors such as End-to-End (E2E) latency is significantly crucial. The standardized routing mechanism for IoT networks (RPL) tries to optimize these parameters via specified routing policies in its Objective Function (OF). The original version of this protocol, and many of its existing extensions are not well-suited for dynamic IoT networks. In the past few years, reinforcement learning methods have significantly involved in dynamic systems, where agents have no acknowledgment about their surrounding environment. These techniques provide a predictive model based on the interaction between an agent and its environment to reach a semi-optimized solution; For instance, the matter of packet transmission, and their delivery in unstable IoT networks. Accordingly, this paper introduces PEARL; a machine-learning based routing policy for IoT networks, which is both, delay-aware, and power-efficient. PEARL employs a novel routing policy based on the q-learning algorithm, which uses the one-hop E2E delay as its main path selection metric to determine the rewards of the algorithm, and to improve the E2E delay, and consumed power simultaneously in terms of Power-Delay-Product (PDP). According to an extensive set of experiments conducted in the Cooja simulator, in addition to improving reliability in the network in terms of Packet Delivery Ratio (PDR), PEARL has improved the amount of E2E delay, and PDP metrics in the network by up to 61% and 72%, against the state-of-the-art, respectively.","PeriodicalId":38446,"journal":{"name":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","volume":"16 1","pages":"1-8"},"PeriodicalIF":0.5000,"publicationDate":"2022-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Embedded and Real-Time Communication Systems (IJERTCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/rtest56034.2022.9849862","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Routing between IoT nodes is an important challenge, due to its impact on link- and node-level metrics including power consumption, reliability, and latency. Because IoT environments are low-power and lossy, the amount of consumed power and the ratio of delivered packets play an important role in the overall performance of the system. Meanwhile, in some IoT applications, e.g., remote health-care monitoring systems, other factors such as End-to-End (E2E) latency are crucial. The standardized routing protocol for IoT networks, RPL, tries to optimize these parameters via routing policies specified in its Objective Function (OF). The original version of this protocol and many of its existing extensions are not well-suited to dynamic IoT networks. In recent years, reinforcement learning methods have been widely applied to dynamic systems in which agents have no prior knowledge of their surrounding environment. These techniques build a predictive model from the interaction between an agent and its environment to reach a near-optimal solution, for instance for packet transmission and delivery in unstable IoT networks. Accordingly, this paper introduces PEARL, a machine-learning-based routing policy for IoT networks that is both delay-aware and power-efficient. PEARL employs a novel routing policy based on the Q-learning algorithm, which uses the one-hop E2E delay as its main path-selection metric to determine the algorithm's rewards, improving the E2E delay and the consumed power simultaneously in terms of the Power-Delay Product (PDP). According to an extensive set of experiments conducted in the Cooja simulator, in addition to improving network reliability in terms of Packet Delivery Ratio (PDR), PEARL improves the E2E delay and PDP metrics by up to 61% and 72%, respectively, against the state-of-the-art.
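The abstract gives only the shape of the algorithm. The sketch below is a minimal, hypothetical illustration of the kind of Q-learning parent-selection policy it describes, where the reward is derived from the measured one-hop delay and candidates are ultimately judged by their Power-Delay Product. The class name, the hyperparameters (alpha, gamma, epsilon), and the exact reward shaping are assumptions for illustration only, not the paper's implementation.

```python
import random

# Illustrative sketch of a Q-learning parent-selection policy in the spirit
# of PEARL. The reward shaping, hyperparameters, and state/action model are
# assumptions for illustration, not the paper's actual algorithm.
class QRouter:
    def __init__(self, parents, alpha=0.5, gamma=0.8, epsilon=0.1):
        self.q = {p: 0.0 for p in parents}  # one Q-value per candidate parent
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_parent(self):
        # Epsilon-greedy: mostly exploit the best-known parent, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, parent, delay_s):
        # Reward low measured one-hop delay: shorter delay -> larger reward.
        reward = 1.0 / (1.0 + delay_s)
        best_next = max(self.q.values())
        self.q[parent] += self.alpha * (reward + self.gamma * best_next - self.q[parent])

def power_delay_product(power_mw, delay_s):
    # PDP combines the two objectives PEARL optimizes jointly.
    return power_mw * delay_s

# Usage: pick a parent, transmit, measure the one-hop delay, then update.
router = QRouter(parents=["node_2", "node_5", "node_7"])
parent = router.choose_parent()
measured_delay = 0.042  # seconds, e.g. from an ACK round-trip (stubbed value)
router.update(parent, measured_delay)
print(parent, power_delay_product(power_mw=1.8, delay_s=measured_delay))
```

In a real RPL deployment the candidate set would be the node's parent list in the DODAG, and the delay sample would come from per-packet acknowledgments; both are stubbed out here.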
Source journal metrics: CiteScore 1.70 · Self-citation rate 14.30% · Annual publications: 17