CORE: Connectivity Optimization via REinforcement Learning in WANETs
A. Gorovits, Karyn Doke, Lin Zhang, M. Zheleva, Petko Bogdanov
2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)
Published: 2021-07-06 · DOI: 10.1109/SECON52354.2021.9491597
Citations: 1
Abstract
While mobile devices are ubiquitous, their supporting communication infrastructure is cost-effective only in densely populated urban areas and is often lacking in rural settings. This lack of connectivity leads to lost opportunities in applications such as rural emergency preparedness and response. Peer-to-peer exchange that uses predictable human mobility can enable delay-tolerant information access in rural settings. We propose CORE, an adaptive distributed solution for device-to-device Connectivity Optimization via REinforcement Learning in wireless ad hoc networks. Our solution is designed for collaborative distributed agents with intermittent connectivity and limited battery power, but predictable mobility within short temporal horizons. We seek to maximize the utility of connection attempts while keeping the power expenditure within a predefined battery budget. Agents learn to adaptively make automated decisions about when to attempt connections and exchange information, based on a local RL model of their own mobility and that of other agents they learn about from exchanges. Using both synthetic and real-world mobility traces, we demonstrate that agents are able to materialize 95% of the possible connections using 20% of their battery, and that they successfully adapt to changes in the underlying mobility patterns within several days of learning.
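The core decision problem the abstract describes — each agent choosing, slot by slot, whether to spend battery on a connection attempt so as to maximize connection utility within a power budget — can be illustrated with a minimal tabular Q-learning sketch. This is not the paper's actual CORE algorithm (its state, reward, and mobility model are not specified here); the state encoding `(time slot, battery remaining)`, the reward shaping, and the `peer_schedule` contact pattern below are all illustrative assumptions.

```python
import random

class ConnectionAgent:
    """Illustrative tabular Q-learning agent: in each time slot, choose to
    stay idle (action 0) or attempt a connection (action 1, costs battery)."""

    def __init__(self, n_slots, battery_budget, alpha=0.1, gamma=0.9, eps=0.1):
        self.n_slots = n_slots
        self.budget = battery_budget
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.Q = {}  # maps state (slot, battery_left) -> [Q_idle, Q_attempt]

    def _q(self, state):
        return self.Q.setdefault(state, [0.0, 0.0])

    def act(self, state):
        # epsilon-greedy exploration
        if random.random() < self.eps:
            return random.randint(0, 1)
        q = self._q(state)
        return 0 if q[0] >= q[1] else 1

    def update(self, state, action, reward, next_state):
        # one-step Q-learning backup
        q = self._q(state)
        target = reward + self.gamma * max(self._q(next_state))
        q[action] += self.alpha * (target - q[action])

def run_episode(agent, peer_schedule, attempt_cost=1.0, success_utility=5.0):
    """Simulate one mobility period: peer_schedule[slot] is True when a
    peer is predictably in range. Returns the utility materialized."""
    battery = agent.budget
    total_utility = 0.0
    for slot in range(agent.n_slots):
        state = (slot, battery)
        # budget constraint: cannot attempt with insufficient battery
        action = agent.act(state) if battery >= attempt_cost else 0
        reward = 0.0
        if action == 1:
            battery -= attempt_cost
            reward -= attempt_cost  # power expenditure penalty
            if peer_schedule[slot]:
                reward += success_utility
                total_utility += success_utility
        agent.update(state, action, reward, (slot + 1, battery))
    return total_utility
```

Trained repeatedly against a fixed contact schedule, the agent learns to concentrate its limited attempts on slots where a peer is predictably present — the same utility-versus-battery trade-off the abstract describes, in miniature. A change in `peer_schedule` is then absorbed over further episodes, loosely mirroring the adaptation to shifting mobility patterns.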