Energy management of PV-storage systems: ADP approach with temporal difference learning

Chanaka Keerthisinghe, G. Verbič, Archie C. Chapman
{"title":"光伏存储系统的能量管理:ADP方法与时间差分学习","authors":"Chanaka Keerthisinghe, G. Verbič, Archie C. Chapman","doi":"10.1109/PSCC.2016.7540924","DOIUrl":null,"url":null,"abstract":"In the future, residential energy users can seize the full potential of demand response schemes by using an automated home energy management system (HEMS) to schedule their distributed energy resources. In order to generate high quality schedules, a HEMS needs to consider the stochastic nature of the PV generation and energy consumption as well as its inter-daily variations over several days. However, extending the decision horizon of proposed optimisation techniques is computationally difficult and moreover, these approaches are only computationally feasible with a limited number of storage devices and a low-resolution decision horizon. Given these existing shortcomings, this paper presents an approximate dynamic programming (ADP) approach with temporal difference learning for implementing a computationally efficient HEMS. In ADP, we obtain policies from value function approximations by stepping forward in time, compared to the value functions obtained by backward induction in DP. We use empirical data collected during the Smart Grid Smart City project in NSW, Australia, to estimate the parameters of a Markov chain model of PV output and electrical demand, which are then used in all simulations. To evaluate the quality of the solutions generated by ADP, we compare the ADP method to stochastic mixed-integer linear programming (MILP) and dynamic programming (DP). Our results show that ADP computes a solution much quicker than both DP and stochastic MILP, while providing better quality solutions than stochastic MILP and only a slight reduction in quality compared to the DP solution. Moreover, unlike the computationally-intensive DP, the ADP approach is able to consider a decision horizon beyond one day while also considering multiple storage devices, which results in a HEMS that can capture additional financial benefits","PeriodicalId":265395,"journal":{"name":"2016 Power Systems Computation Conference (PSCC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":"{\"title\":\"Energy management of PV-storage systems: ADP approach with temporal difference learning\",\"authors\":\"Chanaka Keerthisinghe, G. Verbič, Archie C. Chapman\",\"doi\":\"10.1109/PSCC.2016.7540924\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the future, residential energy users can seize the full potential of demand response schemes by using an automated home energy management system (HEMS) to schedule their distributed energy resources. In order to generate high quality schedules, a HEMS needs to consider the stochastic nature of the PV generation and energy consumption as well as its inter-daily variations over several days. However, extending the decision horizon of proposed optimisation techniques is computationally difficult and moreover, these approaches are only computationally feasible with a limited number of storage devices and a low-resolution decision horizon. Given these existing shortcomings, this paper presents an approximate dynamic programming (ADP) approach with temporal difference learning for implementing a computationally efficient HEMS. 
In ADP, we obtain policies from value function approximations by stepping forward in time, compared to the value functions obtained by backward induction in DP. We use empirical data collected during the Smart Grid Smart City project in NSW, Australia, to estimate the parameters of a Markov chain model of PV output and electrical demand, which are then used in all simulations. To evaluate the quality of the solutions generated by ADP, we compare the ADP method to stochastic mixed-integer linear programming (MILP) and dynamic programming (DP). Our results show that ADP computes a solution much quicker than both DP and stochastic MILP, while providing better quality solutions than stochastic MILP and only a slight reduction in quality compared to the DP solution. Moreover, unlike the computationally-intensive DP, the ADP approach is able to consider a decision horizon beyond one day while also considering multiple storage devices, which results in a HEMS that can capture additional financial benefits\",\"PeriodicalId\":265395,\"journal\":{\"name\":\"2016 Power Systems Computation Conference (PSCC)\",\"volume\":\"35 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"21\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 Power Systems Computation Conference (PSCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PSCC.2016.7540924\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 Power Systems Computation Conference (PSCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PSCC.2016.7540924","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 21

Abstract

In the future, residential energy users will be able to seize the full potential of demand response schemes by using an automated home energy management system (HEMS) to schedule their distributed energy resources. In order to generate high-quality schedules, a HEMS needs to consider the stochastic nature of PV generation and energy consumption, as well as their variations across several days. However, extending the decision horizon of previously proposed optimisation techniques is computationally difficult; moreover, these approaches are only computationally feasible with a limited number of storage devices and a low-resolution decision horizon. Given these shortcomings, this paper presents an approximate dynamic programming (ADP) approach with temporal difference learning for implementing a computationally efficient HEMS. In ADP, we obtain policies from value function approximations by stepping forward in time, in contrast to DP, where value functions are obtained by backward induction. We use empirical data collected during the Smart Grid Smart City project in NSW, Australia, to estimate the parameters of a Markov chain model of PV output and electrical demand, which are then used in all simulations. To evaluate the quality of the solutions generated by ADP, we compare the ADP method to stochastic mixed-integer linear programming (MILP) and dynamic programming (DP). Our results show that ADP computes a solution much more quickly than both DP and stochastic MILP, while providing better-quality solutions than stochastic MILP and only a slight reduction in quality compared to the DP solution. Moreover, unlike the computationally intensive DP, the ADP approach is able to consider a decision horizon beyond one day as well as multiple storage devices, resulting in a HEMS that can capture additional financial benefits.
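
The abstract names two computational ingredients: a Markov chain model of PV output and demand whose transition probabilities are estimated from empirical data, and a forward-in-time ADP scheme that refines a value function approximation with temporal difference learning. The sketch below illustrates both on a deliberately simplified battery-scheduling problem. It is not the authors' implementation: the tariff, PV levels, battery sizing, discretisation, and helper names such as estimate_transition_matrix are all hypothetical stand-ins, and the "empirical" PV trace is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Hypothetical problem data (the paper uses Smart Grid Smart City data) ----
T = 48                      # half-hourly decision stages over one day
PV_STATES = 4               # discretised PV output levels
pv_levels = np.array([0.0, 0.5, 1.5, 3.0])                 # kW per PV state
demand = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))  # stylised load (kW)
price = np.where((np.arange(T) > 28) & (np.arange(T) < 40), 0.50, 0.20)  # $/kWh
SOC_STATES = 11             # battery state of charge: 0..5 kWh in 0.5 kWh steps
soc_grid = np.linspace(0.0, 5.0, SOC_STATES)
actions = np.array([-1.0, 0.0, 1.0])   # battery power (kW): discharge/idle/charge
dt = 0.5                    # hours per stage

def estimate_transition_matrix(series, n_states):
    """Count-based maximum-likelihood estimate of a first-order Markov
    chain from a discretised time series, as used for the PV/demand model."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(series[:-1], series[1:]):
        counts[s, s_next] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0   # guard unvisited states against division by zero
    return counts / rows

# Synthetic discretised PV trace standing in for the empirical data;
# with 5000 uniform samples every state is visited, so every row is valid.
pv_trace = rng.integers(0, PV_STATES, size=5000)
P = estimate_transition_matrix(pv_trace, PV_STATES)

# ---- ADP with TD(0): learn the value function by stepping forward in time ----
V = np.zeros((T + 1, SOC_STATES, PV_STATES))   # value function approximation
alpha = 0.1                                    # TD learning rate

def stage_cost_and_next(t, soc_idx, pv_idx, a):
    """Stage cost of battery action a and the resulting SOC index,
    or None if the action would push the SOC out of bounds."""
    soc_next = soc_grid[soc_idx] + a * dt
    if soc_next < soc_grid[0] or soc_next > soc_grid[-1]:
        return None
    grid_import = max(demand[t] - pv_levels[pv_idx] + a, 0.0)  # no export revenue
    return price[t] * grid_import * dt, int(round(soc_next / 0.5))

for sweep in range(2000):                      # forward passes (episodes)
    soc_idx, pv_idx = SOC_STATES // 2, rng.integers(PV_STATES)
    for t in range(T):
        best, best_next = np.inf, soc_idx
        for a in actions:
            out = stage_cost_and_next(t, soc_idx, pv_idx, a)
            if out is None:
                continue
            cost, nxt = out
            # expected future value under the estimated PV Markov chain
            q = cost + P[pv_idx] @ V[t + 1, nxt]
            if q < best:
                best, best_next = q, nxt
        # TD(0) update: move V toward the one-step lookahead target
        V[t, soc_idx, pv_idx] += alpha * (best - V[t, soc_idx, pv_idx])
        soc_idx = best_next
        pv_idx = rng.choice(PV_STATES, p=P[pv_idx])  # sample next PV state
```

Note how the forward pass mirrors the contrast drawn in the abstract: rather than sweeping backward over every state as in DP, each episode only visits the states encountered along a sampled trajectory, and after enough sweeps a greedy policy with respect to V yields the schedule. This is what keeps the approach tractable as the horizon grows or more storage devices enlarge the state space.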