Deep reinforcement learning based dynamic pricing for demand response considering market and supply constraints

IF 5.4 Q2 ENERGY & FUELS
Alejandro Fraija, Nilson Henao, Kodjo Agbossou, Sousso Kelouwani, Michaël Fournier, Shaival Hemant Nagarsheth
Journal: Smart Energy, Volume 14, Article 100139
DOI: 10.1016/j.segy.2024.100139
Published: 2024-03-27
PDF: https://www.sciencedirect.com/science/article/pii/S2666955224000091/pdfft?md5=1d534f2342596c403bc6386d5fedd0aa&pid=1-s2.0-S2666955224000091-main.pdf
Citations: 0

Abstract

This paper presents a Reinforcement Learning (RL) approach to a price-based Demand Response (DR) program. The proposed framework manages a dynamic pricing scheme that accounts for constraints on both the supply and market sides. Under these constraints, a DR Aggregator (DRA) is designed that uses a price generator function to establish a desirable power capacity through a coordination loop. Subsequently, a multi-agent system is proposed to exploit the flexibility potential of the residential sector, modifying consumption patterns in response to the resulting price policy. Specifically, electric space heaters are employed as flexible loads that adapt to the published prices, reducing energy costs while maintaining customers' comfort preferences. In addition, the developed mechanism can handle deviations from the optimal consumption plan determined by residential agents at the beginning of the day. The DRA applies an RL method to manage such occurrences while maximizing its profit by adjusting the parameters of the price generator function at each iteration. A comparative study is also carried out between the proposed price-based DR and the RL-based DRA. The results demonstrate the effectiveness of the proposed DR program in offering a power capacity that maximizes the aggregator's profit and meets the needs of residential agents while respecting the system's constraints.
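The abstract describes the mechanism only at a high level; the paper's actual price generator function, residential agent models, and RL update rule are not given here. The following toy Python sketch is purely illustrative of the described loop: a parameterized price generator, simple residential agents that shed flexible load as prices rise, and a gradient-free parameter search standing in for the aggregator's RL update. All function forms, names, and constants are assumptions, not the authors' method.

```python
import random

def price_generator(theta, demand, capacity, base_price=0.10):
    # Hypothetical linear price generator: price rises with the
    # demand-to-capacity ratio, scaled by the tunable parameter theta.
    return base_price + theta * (demand / capacity)

def household_response(price, flexible_load, sensitivity=5.0):
    # Toy residential agent: flexible consumption shrinks as price rises.
    return flexible_load / (1.0 + sensitivity * price)

def dra_profit(theta, demands, capacity):
    # Aggregator revenue minus a penalty when total load exceeds the
    # supply-side capacity constraint.
    prices = [price_generator(theta, d, capacity) for d in demands]
    loads = [household_response(p, d) for p, d in zip(prices, demands)]
    revenue = sum(p * l for p, l in zip(prices, loads))
    overload = max(0.0, sum(loads) - capacity)
    return revenue - 10.0 * overload

def tune_theta(demands, capacity, steps=200, lr=0.05, seed=0):
    # Gradient-free hill climbing as a stand-in for the paper's RL update:
    # perturb theta, keep the candidate only if aggregator profit improves.
    rng = random.Random(seed)
    theta = 0.0
    best = dra_profit(theta, demands, capacity)
    for _ in range(steps):
        cand = theta + rng.uniform(-lr, lr)
        profit = dra_profit(cand, demands, capacity)
        if profit > best:
            theta, best = cand, profit
    return theta, best
```

Each accepted step plays the role of one iteration of the coordination loop: the aggregator reprices, the agents respond, and the parameter is retained only if profit improves under the capacity constraint.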


Source journal: Smart Energy (Engineering, Mechanical Engineering)
CiteScore: 9.20
Self-citation rate: 0.00%
Articles per year: 29
Review time: 73 days