Explainable and generalizable AI for AGC dispatch with heterogeneous generation units: A case study using graph convolutional networks

Xiaoshun Zhang, Kun Zhang, Zhengxun Guo, Penggen Wang, Penghui Xiong, Mingyu Wang

Energy and AI, Volume 22, Article 100621 (published 2025-09-19)
DOI: 10.1016/j.egyai.2025.100621
Citations: 0
Abstract
Automatic generation control (AGC) dispatch is essential for maintaining frequency stability and power balance in modern grids with high renewable penetration. Conventional optimization and machine learning methods either incur heavy computational costs or act as black-box models, which limits interpretability and generalization in safety-critical operations. To overcome these gaps, we propose an explainable and generalizable framework that integrates graph convolutional networks (GCNs) with Shapley additive explanations (SHAP). SHAP provides quantitative feature attributions, revealing spatiotemporal variability and redundancy, while the derived insights are used to iteratively optimize the GCN adjacency matrix and capture inter-generator dependencies more effectively. This closed-loop design enhances both model transparency and robustness. Case studies on a two-area load frequency control (LFC) system and a provincial power grid in China show consistent improvements: in the LFC model, frequency deviation, power deviation, and area control error (ACE) are reduced by 14.30%, 58.95%, and 29.22%, respectively; in the provincial grid, ACE overshoot decreases by 99.52%, frequency deviation by 80.67%, and power overshoot is eliminated, with correction distance reduced by up to 55.24%. These results demonstrate that explainability-driven graph learning can significantly improve the reliability and adaptability of AI-based AGC dispatch in complex, heterogeneous power systems.
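The closed loop the abstract describes — compute Shapley attributions, then use them to rewire the GCN adjacency matrix — can be illustrated with a toy sketch. The snippet below is not the authors' implementation: it estimates Monte-Carlo Shapley attributions for a hypothetical linear "dispatch score" model (`toy_model`) and then prunes adjacency-matrix edges attached to the least-important generators (`prune_adjacency`); all function names, weights, and the `keep_frac` threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Hypothetical surrogate for an AGC dispatch score:
    # a weighted sum of per-generator features.
    w = np.array([0.6, 0.3, 0.05, 0.05])
    return x @ w

def shapley_attributions(model, x, baseline, n_perm=200):
    """Monte-Carlo Shapley values: average each feature's marginal
    contribution over random feature orderings, with `baseline`
    standing in for an 'absent' feature."""
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = model(z)
        for j in order:
            z[j] = x[j]          # switch feature j on
            cur = model(z)
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_perm

def prune_adjacency(A, phi, keep_frac=0.5):
    """Zero out adjacency rows/cols for the generators with the
    smallest |attribution|, keeping the top `keep_frac` fraction."""
    k = max(1, int(np.ceil(keep_frac * len(phi))))
    keep = np.argsort(-np.abs(phi))[:k]
    mask = np.zeros(len(phi), dtype=bool)
    mask[keep] = True
    return A * mask[:, None] * mask[None, :]

x = np.array([1.0, 2.0, 3.0, 4.0])
baseline = np.zeros(4)
phi = shapley_attributions(toy_model, x, baseline)
# For a linear model the estimate is exact: phi == w * (x - baseline).
A = np.ones((4, 4))          # fully connected toy graph
A_pruned = prune_adjacency(A, phi, keep_frac=0.5)
```

In the paper's closed loop this attribute-then-rewire step would be repeated, with the GCN retrained on the updated adjacency matrix after each pruning pass.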