Benchmarking a Decentralized Reinforcement Learning Control Strategy for an Energy Community
Niklas Ebell, M. Pruckner
2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), October 25, 2021
DOI: 10.1109/SmartGridComm51999.2021.9632323
The energy transition towards a more sustainable, secure, and affordable electrical power system with a high share of renewable energy sources increases the energy system's complexity. It creates a more decentralized energy system with many more stakeholders involved. In this context, new data-driven operation control strategies play an important role in providing fast decision support and better coordination of electrical assets in the distribution grid. In this paper, we evaluate a novel multi-agent reinforcement learning approach that relies on cooperative agents with only local state information and aims to balance the electricity generation and consumption of an energy community of ten households. The approach is compared to a rule-based and an optimal control policy. Results show that independent Q-learners achieve 35% better performance than rule-based control and compensate for their high computational effort with adaptability, simple communication requirements, and respect for data privacy.
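The abstract does not specify the learning setup beyond "independent Q-learners with only local state information", so the following is a minimal, generic sketch of independent tabular Q-learning, not the paper's implementation: every class name, hyperparameter, and the three-level action set are illustrative assumptions.

```python
import random
from collections import defaultdict

class IndependentQLearner:
    """Tabular Q-learner that observes only its own local state.

    A generic sketch of the independent-learner idea: each agent runs a
    standard Q-learning update on its own table, with no access to other
    agents' states or actions (hypothetical, not the paper's code).
    """

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # maps (state, action) -> estimated value
        self.actions = actions
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def act(self, state):
        # Epsilon-greedy action selection over the local state only.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Ten household agents, each acting on local state; a shared community-level
# reward (e.g. the negative absolute generation/consumption imbalance) would
# be fed to every agent's update to encourage cooperative balancing.
agents = [IndependentQLearner(actions=[-1, 0, 1]) for _ in range(10)]
```

Because each agent only needs its own state and the scalar reward signal, this design keeps communication requirements minimal and local consumption data private, which matches the trade-off the abstract highlights.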