Deep Recurrent Learning versus Q-Learning for Energy Management Systems in Next Generation Network

Aicha Dridi, Chérifa Boucetta, Hassine Moungla, H. Afifi
{"title":"Deep Recurrent Learning versus Q-Learning for Energy Management Systems in Next Generation Network","authors":"Aicha Dridi, Chérifa Boucetta, Hassine Moungla, H. Afifi","doi":"10.1109/GLOBECOM46510.2021.9685620","DOIUrl":null,"url":null,"abstract":"An AI based energy management system (EMS) for microgrids is proposed. It is composed of three modules: a strategy based module, a deep learning (DL) and a reinforcement learning module (RL). This framework determines heuristically the optimal actions for the microgrid system under different time-dependent environmental conditions. In essence, a main innovation is applied to the EMS. Our deep learning algorithm uses recurrent neural networks (RNNs) instead of the habitual State Action Reward (SAR) approach (whether classical or deep). Learning is hence guided by successful actions rather than by blind exploration. A large improvement in learning rates is hence observed when compared to classical Q-learning on real datasets that present a large diversity in energy consumption profiles, acquired in French premises over a long period. It leads to question about the best appropriate reinforcement policies to adopt when solving large state environments.","PeriodicalId":200641,"journal":{"name":"2021 IEEE Global Communications Conference (GLOBECOM)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Global Communications Conference (GLOBECOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBECOM46510.2021.9685620","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

An AI-based energy management system (EMS) for microgrids is proposed. It is composed of three modules: a strategy-based module, a deep learning (DL) module, and a reinforcement learning (RL) module. This framework heuristically determines the optimal actions for the microgrid system under different time-dependent environmental conditions. The main innovation lies in the EMS itself: our deep learning algorithm uses recurrent neural networks (RNNs) instead of the usual State-Action-Reward (SAR) approach, whether classical or deep. Learning is hence guided by successful actions rather than by blind exploration. A large improvement in learning rates is observed when compared to classical Q-learning on real datasets that present a large diversity in energy consumption profiles, acquired in French premises over a long period. This raises the question of the most appropriate reinforcement policies to adopt when solving large state-space environments.
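To make the Q-learning baseline concrete, the sketch below shows a single tabular Q-learning (Bellman) update of the kind the paper compares against. The microgrid state and action spaces here (battery levels, charge/discharge) are hypothetical illustrations, not the paper's actual environment or hyperparameters.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy setup: 3 hypothetical battery states x 2 actions (0=charge, 1=discharge).
Q = np.zeros((3, 2))
# Agent discharges in state 0, earns reward 1.0, lands in state 1.
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
```

The exploration cost of this update rule on large state spaces is what motivates the paper's alternative: an RNN trained on sequences of successful actions instead of value iteration over every state-action pair.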