Dynamic Activation Policies for Event Capture with Rechargeable Sensors
Zhu Ren, Peng Cheng, Jiming Chen, David K. Y. Yau, Youxian Sun
2012 IEEE 32nd International Conference on Distributed Computing Systems, pp. 152-162
Published: 2012-06-18 · DOI: 10.1109/ICDCS.2012.70
Citations: 15
Abstract
We consider the problem of event capture by a rechargeable sensor network. We assume that the events of interest follow a renewal process whose inter-arrival times are drawn from a general probability distribution, and that a stochastic recharge process provides energy for the sensors' operation. The dynamics of the event and recharge processes make the optimal sensor activation problem highly challenging. In this paper, we first consider the single-sensor problem. Using dynamic control theory, we consider a full-information model in which, independent of its activation schedule, the sensor knows whether or not an event occurred in the last time slot. In this case, the problem is framed as a Markov decision process (MDP), and we develop a simple and optimal policy for its solution. We then consider a partial-information model in which the sensor learns of an event's occurrence only while it is active. This problem falls into the class of partially observable Markov decision processes (POMDPs). Since computing the POMDP's optimal policy requires exponential computational effort and is intrinsically hard, we propose an efficient heuristic clustering policy and evaluate its performance. Finally, our solutions are extended to a network setting in which multiple sensors collaborate to capture the events. We provide extensive simulation results to evaluate the performance of our solutions.
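To make the full-information MDP formulation concrete, the following is a minimal sketch, not the paper's exact model: it assumes a small battery capacity `E`, a Bernoulli recharge process (one energy unit per slot with probability `p_charge`), and memoryless Bernoulli event arrivals with probability `p_event` (a special case of the paper's general renewal process, under which the energy level alone is a sufficient state). All parameter names and values are illustrative assumptions. Value iteration then yields an activation policy over battery levels.

```python
# Illustrative sketch (NOT the paper's exact formulation): value iteration
# for a toy full-information sensor-activation MDP.
# Assumptions: battery capacity E; recharge of 1 unit w.p. p_charge per slot;
# Bernoulli events w.p. p_event per slot (memoryless special case of a
# renewal process); activation costs 1 energy unit and captures any event
# occurring in that slot (reward 1); discount factor gamma.

E = 5            # battery capacity (energy units)
p_charge = 0.4   # probability of harvesting 1 energy unit per slot
p_event = 0.3    # probability an event occurs in a slot
gamma = 0.95     # discount factor

def step_value(e, activate, V):
    """Expected immediate reward plus discounted next-state value."""
    reward = p_event if activate else 0.0   # expected captures this slot
    e_after = e - 1 if activate else e      # activation drains one unit
    # battery gains one unit with probability p_charge, capped at E
    v_next = (p_charge * V[min(e_after + 1, E)]
              + (1 - p_charge) * V[e_after])
    return reward + gamma * v_next

def value_iteration(tol=1e-9):
    V = [0.0] * (E + 1)
    while True:
        V_new = []
        for e in range(E + 1):
            vals = [step_value(e, False, V)]     # sleeping is always feasible
            if e >= 1:                           # activating needs energy
                vals.append(step_value(e, True, V))
            V_new.append(max(vals))
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy: activate whenever activation is a maximizing action.
policy = [e >= 1 and step_value(e, True, V) >= step_value(e, False, V)
          for e in range(E + 1)]
```

Under these memoryless assumptions the knowledge of last-slot event occurrence carries no predictive value, which is precisely why the paper's general renewal-process setting, where inter-arrival times are not memoryless, makes the activation timing problem interesting.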