{"title":"带能量收集的丢包链路线性控制中传感器传输能量的最优分配","authors":"S. Knorn, S. Dey","doi":"10.1109/CDC.2015.7402374","DOIUrl":null,"url":null,"abstract":"This paper studies a closed loop linear control system. The sensor computes a state estimate and sends it to the controller/actuator in the receiver block over a randomly fading packet dropping link. The receiver sends an ACK/NACK packet to the transmitter over a link. It is assumed that the transmission energy per packet at the sensor depletes a battery of limited capacity, replenished by an energy harvester. The objective is to design an optimal energy allocation policy and an optimal control policy so that a finite horizon LQG control cost is minimized. It is shown that in case the receiver to sensor feedback channel is free of errors, a separation principle holds. Hence, the optimal LQG controller is linear, the Kalman filter is optimal and the optimal energy allocation policy is obtained via solving a backward dynamic programming equation. In case the feedback channel is erroneous, the separation principle does not hold. In this case, we propose a suboptimal policy where the controller still uses a linear control, and the transmitter minimizes an expected sum of the trace of an “estimated” receiver state estimation error covariance matrix. Simulations are used to illustrate the relative performance of the proposed algorithms and various heuristic algorithms for both the perfect and imperfect feedback cases. 
It is seen that the dynamic programming based policies outperform the simple heuristic policies by a margin.","PeriodicalId":308101,"journal":{"name":"2015 54th IEEE Conference on Decision and Control (CDC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Optimal sensor transmission energy allocation for linear control over a packet dropping link with energy harvesting\",\"authors\":\"S. Knorn, S. Dey\",\"doi\":\"10.1109/CDC.2015.7402374\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper studies a closed loop linear control system. The sensor computes a state estimate and sends it to the controller/actuator in the receiver block over a randomly fading packet dropping link. The receiver sends an ACK/NACK packet to the transmitter over a link. It is assumed that the transmission energy per packet at the sensor depletes a battery of limited capacity, replenished by an energy harvester. The objective is to design an optimal energy allocation policy and an optimal control policy so that a finite horizon LQG control cost is minimized. It is shown that in case the receiver to sensor feedback channel is free of errors, a separation principle holds. Hence, the optimal LQG controller is linear, the Kalman filter is optimal and the optimal energy allocation policy is obtained via solving a backward dynamic programming equation. In case the feedback channel is erroneous, the separation principle does not hold. In this case, we propose a suboptimal policy where the controller still uses a linear control, and the transmitter minimizes an expected sum of the trace of an “estimated” receiver state estimation error covariance matrix. 
Simulations are used to illustrate the relative performance of the proposed algorithms and various heuristic algorithms for both the perfect and imperfect feedback cases. It is seen that the dynamic programming based policies outperform the simple heuristic policies by a margin.\",\"PeriodicalId\":308101,\"journal\":{\"name\":\"2015 54th IEEE Conference on Decision and Control (CDC)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 54th IEEE Conference on Decision and Control (CDC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CDC.2015.7402374\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 54th IEEE Conference on Decision and Control (CDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CDC.2015.7402374","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Optimal sensor transmission energy allocation for linear control over a packet dropping link with energy harvesting
This paper studies a closed-loop linear control system. The sensor computes a state estimate and sends it to the controller/actuator in the receiver block over a randomly fading packet-dropping link, and the receiver acknowledges each packet by sending an ACK/NACK to the transmitter over a feedback link. It is assumed that each packet transmission at the sensor depletes a battery of limited capacity, which is replenished by an energy harvester. The objective is to jointly design an optimal energy allocation policy and an optimal control policy that minimize a finite-horizon LQG control cost. It is shown that when the receiver-to-sensor feedback channel is error-free, a separation principle holds: the optimal LQG controller is linear, the Kalman filter is the optimal estimator, and the optimal energy allocation policy is obtained by solving a backward dynamic programming equation. When the feedback channel is erroneous, the separation principle no longer holds. In this case, we propose a suboptimal policy in which the controller still uses a linear control law, and the transmitter minimizes the expected sum of the trace of an "estimated" receiver state estimation error covariance matrix. Simulations illustrate the relative performance of the proposed algorithms and several heuristic algorithms for both the perfect and imperfect feedback cases; the dynamic-programming-based policies are seen to outperform the simple heuristics by a clear margin.
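The backward dynamic programming idea described in the abstract can be sketched for a scalar plant. Everything below is an illustrative assumption rather than the paper's formulation: the packet-success model `1 - exp(-g*e)`, the deterministic per-slot energy harvest, the integer energy levels, and the discretization of the error variance onto a grid. The DP state is (battery level, error variance); the stage cost is the current error variance, a successful transmission resets the variance to `r`, and a drop lets it grow as `a²P + q`.

```python
import numpy as np

# Illustrative sketch (assumed model, not the paper's exact algorithm):
# backward DP for sensor transmission-energy allocation over a packet-
# dropping link, for a scalar plant x_{k+1} = a*x_k + w_k, Var(w_k) = q.
# A packet sent with energy e is assumed to arrive with prob. 1 - exp(-g*e).

a, q, r, g = 1.2, 1.0, 0.1, 0.8
T = 10                                   # horizon length
b_max, harvest = 5, 1                    # battery capacity / per-slot harvest
energies = np.arange(0, b_max + 1)       # admissible integer energy levels
P_grid = np.linspace(r, 20.0, 60)        # discretized error-variance grid

def snap(P):
    """Index of the grid point nearest to variance P (clipped at the top)."""
    return int(np.argmin(np.abs(P_grid - min(P, P_grid[-1]))))

# V[b, i] = minimal expected sum of future error variance from (battery b, P_i)
V = np.zeros((b_max + 1, len(P_grid)))
policy = [np.zeros((b_max + 1, len(P_grid)), dtype=int) for _ in range(T)]

for k in reversed(range(T)):             # backward dynamic programming
    V_new = np.full_like(V, np.inf)
    for b in range(b_max + 1):
        for i, P in enumerate(P_grid):
            for e in energies[energies <= b]:          # cannot overspend
                p_ok = 1.0 - np.exp(-g * e)            # arrival probability
                b_next = min(b - e + harvest, b_max)   # battery dynamics
                cost = (P
                        + p_ok * V[b_next, snap(r)]             # delivered
                        + (1 - p_ok) * V[b_next, snap(a * a * P + q)])  # dropped
                if cost < V_new[b, i]:
                    V_new[b, i] = cost
                    policy[k][b, i] = e
    V = V_new

# With a full battery, the DP policy transmits (spends energy) when the
# current error variance is large, and saves energy when it is small.
print(policy[0][b_max, snap(P_grid[-1])], policy[0][b_max, snap(r)])
```

The nested loops trade generality for readability; a vectorized or interpolated value-function implementation would be preferable for vector plants or finer grids, and a stochastic harvest would simply add an expectation over `b_next`.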