Bangji Fan;Xinghua Liu;Gaoxi Xiao;Yan Xu;Xiang Yang;Peng Wang
IEEE Transactions on Smart Grid, vol. 16, no. 2, pp. 1706-1718. Published 2024-10-17. DOI: 10.1109/TSG.2024.3482696. Available at https://ieeexplore.ieee.org/document/10720914/.
A Memory-Based Graph Reinforcement Learning Method for Critical Load Restoration With Uncertainties of Distributed Energy Resource
The integration of distributed energy resources into distribution networks, marked by their inherent uncertainties, presents a substantial challenge for devising load restoration strategies. To tackle this challenge, we develop a memory-based graph reinforcement learning approach designed to train an agent to acquire a critical load restoration strategy in a distribution network under uncertainties. Specifically, the restoration problem under uncertainties is formulated as a novel partially observable Markov decision process, in which a multimodal feature-based observation space is proposed. This space includes graph-structured data of the environment and memory information of the agent. The graph-structured data contain latent features of the current observation, thus enlarging the observable domain, while the memory information incorporates temporal correlations between sample sequences to address the partial observability of the environment. Based on the proposed Markov process, we put forth a maximum entropy-based recurrent graph soft actor-critic algorithm to train the agent in partially observable environments through a recursive structure, where entropy regularization is utilized to facilitate a more extensive exploration of the possibilities in a state space with high uncertainties. The performance of the proposed approach is validated by a comparative study against existing results on the IEEE 123-bus system containing wind power and photovoltaic sources.
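The core idea behind the maximum entropy-based training mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the entropy-regularized objective that soft actor-critic style methods optimize, where the agent trades off expected action value against policy entropy so that exploration is preserved in highly uncertain state spaces. All names (`soft_objective`, `alpha`, the example numbers) are hypothetical.

```python
import math

def softmax(logits):
    # Numerically stable softmax over action preferences.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def soft_objective(logits, q_values, alpha=0.2):
    """Entropy-regularized objective: E_pi[Q(s, a)] + alpha * H(pi).

    alpha is the temperature coefficient; alpha = 0 recovers the
    plain expected-value objective, larger alpha rewards exploration.
    """
    pi = softmax(logits)
    entropy = -sum(p * math.log(p) for p in pi)
    expected_q = sum(p * q for p, q in zip(pi, q_values))
    return expected_q + alpha * entropy

# Example: three candidate restoration actions with estimated values.
logits = [1.0, 0.5, -0.3]
q = [2.0, 1.5, 0.1]
print(soft_objective(logits, q, alpha=0.2))
```

With `alpha > 0` the objective always exceeds the plain expected Q-value, since the entropy bonus is non-negative; this is the mechanism the abstract refers to when it says entropy regularization facilitates broader exploration.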
Journal Introduction:
The IEEE Transactions on Smart Grid is a multidisciplinary journal that focuses on research and development in the field of smart grid technology. It covers various aspects of the smart grid, including energy networks, prosumers (consumers who also produce energy), electric transportation, distributed energy resources, and communications. The journal also addresses the integration of microgrids and active distribution networks with transmission systems. It publishes original research on smart grid theories and principles, including technologies and systems for demand response, Advanced Metering Infrastructure, cyber-physical systems, multi-energy systems, transactive energy, data analytics, and electric vehicle integration. Additionally, the journal considers surveys of existing work on the smart grid that propose new perspectives on the history and future of intelligent and active grids.