Scaling configuration of energy harvesting sensors with reinforcement learning

Francesco Fraternali, Bharathan Balaji, Rajesh E. Gupta
{"title":"Scaling configuration of energy harvesting sensors with reinforcement learning","authors":"Francesco Fraternali, Bharathan Balaji, Rajesh E. Gupta","doi":"10.1145/3279755.3279760","DOIUrl":null,"url":null,"abstract":"With the advent of the Internet of Things (IoT), an increasing number of energy harvesting methods are being used to supplement or supplant battery based sensors. Energy harvesting sensors need to be configured according to the application, hardware, and environmental conditions to maximize their usefulness. As of today, the configuration of sensors is either manual or heuristics based, requiring valuable domain expertise. Reinforcement learning (RL) is a promising approach to automate configuration and efficiently scale IoT deployments, but it is not yet adopted in practice. We propose solutions to bridge this gap: reduce the training phase of RL so that nodes are operational within a short time after deployment and reduce the computational requirements to scale to large deployments. We focus on configuration of the sampling rate of indoor solar panel based energy harvesting sensors. We created a simulator based on 3 months of data collected from 5 sensor nodes subject to different lighting conditions. Our simulation results show that RL can effectively learn energy availability patterns and configure the sampling rate of the sensor nodes to maximize the sensing data while ensuring that energy storage is not depleted. The nodes can be operational within the first day by using our methods. 
We show that it is possible to reduce the number of RL policies by using a single policy for nodes that share similar lighting conditions.","PeriodicalId":376211,"journal":{"name":"Proceedings of the 6th International Workshop on Energy Harvesting & Energy-Neutral Sensing Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Workshop on Energy Harvesting & Energy-Neutral Sensing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3279755.3279760","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 27

Abstract

With the advent of the Internet of Things (IoT), an increasing number of energy harvesting methods are being used to supplement or supplant battery-based sensors. Energy harvesting sensors need to be configured according to the application, hardware, and environmental conditions to maximize their usefulness. As of today, the configuration of sensors is either manual or heuristics-based, requiring valuable domain expertise. Reinforcement learning (RL) is a promising approach to automate configuration and efficiently scale IoT deployments, but it is not yet adopted in practice. We propose solutions to bridge this gap: reduce the training phase of RL so that nodes are operational within a short time after deployment, and reduce the computational requirements to scale to large deployments. We focus on configuration of the sampling rate of indoor solar-panel-based energy harvesting sensors. We created a simulator based on 3 months of data collected from 5 sensor nodes subject to different lighting conditions. Our simulation results show that RL can effectively learn energy availability patterns and configure the sampling rate of the sensor nodes to maximize the sensing data while ensuring that energy storage is not depleted. The nodes can be operational within the first day by using our methods. We show that it is possible to reduce the number of RL policies by using a single policy for nodes that share similar lighting conditions.
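The configuration task described in the abstract can be framed as a small tabular RL problem: the agent observes the node's energy state, picks a sampling rate, and is rewarded for sensing data while being penalized for depleting storage. The sketch below uses plain Q-learning with an illustrative state space (battery level, hour of day), action set, and reward shape; all names and values are assumptions for exposition, not the paper's actual formulation.

```python
import random

# Hypothetical discretization (illustrative, not the paper's parameters):
# state = (battery level bucket, hour of day); action = sampling rate.
BATTERY_LEVELS = range(5)        # 0 (empty) .. 4 (full)
HOURS = range(24)
SAMPLING_RATES = [1, 5, 15, 60]  # samples per hour (assumed values)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: state (battery, hour) -> estimated value of each sampling rate
Q = {(b, h): [0.0] * len(SAMPLING_RATES) for b in BATTERY_LEVELS for h in HOURS}

def choose_action(state):
    """Epsilon-greedy selection of a sampling-rate index."""
    if random.random() < EPSILON:
        return random.randrange(len(SAMPLING_RATES))
    values = Q[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """One tabular Q-learning step."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def reward_fn(samples_taken, battery_after):
    """Reward sensing data; heavily penalize depleting energy storage."""
    return float(samples_taken) if battery_after > 0 else -100.0
```

A deployment loop would call `choose_action` once per decision epoch, run the node at the chosen rate, then call `update` with the observed battery level; sharing one such Q-table across nodes with similar lighting conditions is the policy-reduction idea the abstract mentions.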