Handling Coexistence of LoRa with Other Networks through Embedded Reinforcement Learning

Sezana Fahmida, Venkata Prashant Modekurthy, Mahbubur Rahman, Abusayeed Saifullah
DOI: 10.1145/3576842.3582383
Published in: Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation
Publication date: 2023-05-09
Citations: 1

Abstract

The rapid growth of various Low-Power Wide-Area Network (LPWAN) technologies in the limited spectrum brings forth the challenge of their coexistence. Today, LPWANs are not equipped to handle this impending challenge. It is difficult to employ sophisticated media access control protocols on low-power nodes, and coexistence handling designed for WiFi or traditional short-range wireless networks will not work for LPWANs. Due to their long range, LPWAN nodes can be subject to an unprecedented number of hidden nodes, requiring highly energy-efficient techniques to handle such coexistence. In this paper, we address the coexistence problem for LoRa, a leading LPWAN technology. To improve the performance of a LoRa network under coexistence with many independent networks, we propose the design of a novel embedded learning agent based on lightweight reinforcement learning at LoRa nodes. This is done by developing a Q-learning framework while ensuring minimal memory and computation overhead at LoRa nodes. The framework exploits transmission acknowledgments as feedback from the network, based on which a node makes its transmission decisions. To our knowledge, this is the first Q-learning approach for handling coexistence of low-power networks. Considering various coexistence scenarios of a LoRa network, we evaluate our approach through experiments indoors and outdoors. The outdoor results show that our Q-learning approach on average achieves an improvement of 46% in packet reception rate while reducing energy consumption by 66% in a LoRa network. In indoor experiments, we have observed some coexistence scenarios where a current LoRa network loses all packets, while our approach enables a 99% packet reception rate with up to 90% improvement in energy consumption.
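The abstract describes a Q-learning agent that uses acknowledgments (ACKs) as its only feedback signal while keeping memory and computation minimal. The sketch below is purely illustrative, not the paper's implementation: it assumes a single-state (bandit-style) formulation where the agent's actions are hypothetical channel choices and the reward is 1 when an ACK arrives, 0 otherwise. The class name `LiteQAgent` and all parameter defaults are invented for this example.

```python
import random

class LiteQAgent:
    """Minimal tabular Q-learning agent (illustrative sketch only).

    Single-state formulation: one Q value per action, where an
    action is a hypothetical transmission choice (e.g., a channel).
    Reward is derived from whether an ACK was received.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.0, epsilon=0.1):
        self.q = [0.0] * n_actions   # tiny Q table: one float per action
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # 0 => no lookahead (bandit view)
        self.epsilon = epsilon       # exploration probability

    def choose(self):
        """Epsilon-greedy action selection."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))            # explore
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, action, ack_received):
        """Standard Q-learning update with ACK-based reward."""
        reward = 1.0 if ack_received else 0.0
        best_next = max(self.q)  # vanishes when gamma == 0
        self.q[action] += self.alpha * (
            reward + self.gamma * best_next - self.q[action]
        )
```

With the table holding one float per action, the memory footprint stays within the "minimal overhead" constraint the abstract emphasizes for constrained LoRa nodes; the actual state/action design in the paper may differ.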