Design and Testing of a Demand Response Q-Learning Algorithm for a Smart Home Energy Management System

Walter Angano, P. Musau, C. Wekesa
2021 IEEE PES/IAS PowerAfrica, 23 August 2021. DOI: 10.1109/PowerAfrica52236.2021.9543244

Abstract

Growth in energy demand creates a need to meet that demand, either through wired solutions, such as investment in new or expansion of existing generation, transmission, and distribution systems, or through non-wired solutions such as Demand Response (DR). This paper proposes a Q-learning algorithm, an off-policy Reinforcement Learning technique, to implement DR in a residential energy system adopting a static Time of Use (ToU) tariff structure, to improve its learning speed by introducing a knowledge base that updates fuzzy logic rules based on consumer satisfaction feedback, and to minimize dissatisfaction error. Testing was done on a physical system by deploying the algorithm in MATLAB and interfacing it with the physical environment through serial communication with an Arduino Uno. A load curve generated from appliance and ToU data was used to test the algorithm. The designed algorithm reduced electricity cost by 11% and improved the learning speed of its agent within 500 episodes.
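The abstract describes tabular Q-learning over a static ToU tariff. A minimal single-state sketch of that idea is shown below; the 24-slot tariff, the appliance consumption, and all hyperparameters are illustrative assumptions, not values from the paper, and the paper's fuzzy-logic knowledge base and dissatisfaction feedback are omitted. The agent learns which hour to schedule a shiftable load, and the greedy policy settles on the cheapest tariff slot.

```python
import random

# Hypothetical static Time-of-Use tariff (price per kWh) over 24 hourly
# slots: off-peak, shoulder, and peak prices are illustrative only.
TARIFF = [0.08] * 6 + [0.15] * 11 + [0.25] * 5 + [0.08] * 2

APPLIANCE_KWH = 1.5  # assumed per-run consumption of a shiftable load
ALPHA = 0.1          # learning rate
GAMMA = 0.9          # discount factor (unused here: one-step episodes)
EPISODES = 5000


def cost(hour):
    """Electricity cost of running the appliance starting at `hour`."""
    return TARIFF[hour] * APPLIANCE_KWH


def train(epsilon=0.2, seed=0):
    """Tabular Q-learning with a single state and 24 actions (start hours).

    Reward is the negative running cost, so the greedy policy converges
    toward the cheapest tariff slot. Each episode is one scheduling
    decision, so there is no successor state and the bootstrap term
    (GAMMA * max Q) is zero.
    """
    rng = random.Random(seed)
    q = [0.0] * 24
    for _ in range(EPISODES):
        if rng.random() < epsilon:
            hour = rng.randrange(24)                  # explore
        else:
            hour = max(range(24), key=q.__getitem__)  # exploit
        reward = -cost(hour)
        q[hour] += ALPHA * (reward - q[hour])         # Q-learning update
    return q


q_table = train()
best_hour = max(range(24), key=q_table.__getitem__)
print(f"learned start hour: {best_hour:02d}:00 at {TARIFF[best_hour]}/kWh")
```

In the paper's setting the reward would also penalize consumer dissatisfaction (e.g. shifting a load far from its preferred time), and the state space would grow with the number of appliances; this sketch keeps only the tariff-cost term to show the bare update rule.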