Title: Using smart devices for system-level management and control in the smart grid: A reinforcement learning framework
Authors: E. Kara, M. Berges, B. Krogh, S. Kar
Venue: 2012 IEEE Third International Conference on Smart Grid Communications (SmartGridComm)
Published: 2012-11-01
DOI: 10.1109/SmartGridComm.2012.6485964
Citations: 44
Abstract
This paper presents a stochastic modeling framework for employing adaptive control strategies to provide short-term ancillary services to the power grid using a population of heterogeneous thermostatically controlled loads. The problem is recast as a classical Markov Decision Process (MDP) in order to leverage existing tools from reinforcement learning. Initial considerations and possible reductions of the action and state spaces are described. A Q-learning approach is implemented in simulation to demonstrate that the performance of the new MDP representation is comparable to that of a Linear Time-Invariant (LTI) representation in a reference-tracking scenario.