Data-driven power system dynamic security assessment under adversarial attacks: Risk warning based interpretation analysis and mitigation

IF 1.6 · Q4 · ENERGY & FUELS
Zhebin Chen, Chao Ren, Yan Xu, Zhao Yang Dong, Qiaoqiao Li
{"title":"对抗性攻击下数据驱动的电力系统动态安全评估:基于风险预警的解释分析与缓解","authors":"Zhebin Chen,&nbsp;Chao Ren,&nbsp;Yan Xu,&nbsp;Zhao Yang Dong,&nbsp;Qiaoqiao Li","doi":"10.1049/esi2.12118","DOIUrl":null,"url":null,"abstract":"<p>Power system dynamic security assessment (DSA) has long been essential for protecting the system from the risk of cascading failures and wide-spread blackouts. The machine learning (ML) based data-driven strategy is promising due to its real-time computation speed and knowledge discovery capacity. However, ML algorithms are found to be vulnerable against well-designed malicious input samples that can lead to wrong outputs. Adversarial attacks are implemented to measure the vulnerability of the trained ML models. Specifically, the targets of attacks are identified by interpretation analysis that the data features with large SHAP values will be assigned with perturbations. The proposed method has the superiority that an instance-based DSA method is established with interpretation of the ML models, where effective adversarial attacks and its mitigation countermeasure are developed by assigning the perturbations on features with high importance. Later, these generated adversarial examples are employed for adversarial training and mitigation. The simulation results present that the model accuracy and robustness vary with the quantity of adversarial examples used, and there is not necessarily a trade-off between these two indicators. Furthermore, the rate of successful attacks increases when a greater bound of perturbations is permitted. By this method, the correlation between model accuracy and robustness can be clearly stated, which will provide considerable assistance in decision making.</p>","PeriodicalId":33288,"journal":{"name":"IET Energy Systems Integration","volume":"6 1","pages":"62-72"},"PeriodicalIF":1.6000,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/esi2.12118","citationCount":"0","resultStr":"{\"title\":\"Data-driven power system dynamic security assessment under adversarial attacks: Risk warning based interpretation analysis and mitigation\",\"authors\":\"Zhebin Chen,&nbsp;Chao Ren,&nbsp;Yan Xu,&nbsp;Zhao Yang Dong,&nbsp;Qiaoqiao Li\",\"doi\":\"10.1049/esi2.12118\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Power system dynamic security assessment (DSA) has long been essential for protecting the system from the risk of cascading failures and wide-spread blackouts. The machine learning (ML) based data-driven strategy is promising due to its real-time computation speed and knowledge discovery capacity. However, ML algorithms are found to be vulnerable against well-designed malicious input samples that can lead to wrong outputs. Adversarial attacks are implemented to measure the vulnerability of the trained ML models. Specifically, the targets of attacks are identified by interpretation analysis that the data features with large SHAP values will be assigned with perturbations. The proposed method has the superiority that an instance-based DSA method is established with interpretation of the ML models, where effective adversarial attacks and its mitigation countermeasure are developed by assigning the perturbations on features with high importance. Later, these generated adversarial examples are employed for adversarial training and mitigation. 
The simulation results present that the model accuracy and robustness vary with the quantity of adversarial examples used, and there is not necessarily a trade-off between these two indicators. Furthermore, the rate of successful attacks increases when a greater bound of perturbations is permitted. By this method, the correlation between model accuracy and robustness can be clearly stated, which will provide considerable assistance in decision making.</p>\",\"PeriodicalId\":33288,\"journal\":{\"name\":\"IET Energy Systems Integration\",\"volume\":\"6 1\",\"pages\":\"62-72\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2023-10-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/esi2.12118\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Energy Systems Integration\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/esi2.12118\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENERGY & FUELS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Energy Systems Integration","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/esi2.12118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENERGY & FUELS","Score":null,"Total":0}
Citations: 0

Abstract

Power system dynamic security assessment (DSA) has long been essential for protecting the system against cascading failures and widespread blackouts. Machine learning (ML) based data-driven strategies are promising thanks to their real-time computation speed and knowledge-discovery capacity. However, ML algorithms are vulnerable to well-designed malicious input samples that can lead to wrong outputs. In this work, adversarial attacks are implemented to measure the vulnerability of the trained ML models. Specifically, the attack targets are identified through interpretation analysis: the data features with the largest SHAP values are the ones assigned perturbations. The advantage of the proposed method is that an instance-based DSA model is established together with an interpretation of the ML models, so that effective adversarial attacks and their mitigation countermeasures can be developed by assigning perturbations to the most important features. The generated adversarial examples are then employed for adversarial training and mitigation. Simulation results show that model accuracy and robustness vary with the quantity of adversarial examples used, and that there is not necessarily a trade-off between the two indicators. Furthermore, the attack success rate increases when a larger perturbation bound is permitted. With this method, the correlation between model accuracy and robustness can be clearly characterised, which provides considerable assistance in decision making.
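To make the attack construction concrete, the following is a minimal, illustrative sketch (not the authors' code) of the SHAP-guided perturbation idea described in the abstract: a data-driven DSA classifier is trained, the features with the largest mean absolute SHAP values are selected as attack targets, and bounded perturbations are assigned to those features only. The synthetic dataset, the random-forest model, the feature budget k, and the bound eps are all assumptions, since the paper's exact setup is not given here.

```python
"""
Illustrative sketch of a SHAP-guided adversarial attack on a data-driven DSA
classifier. Synthetic data stands in for the unspecified power-system features;
the random forest, feature budget k, and perturbation bound eps are assumptions.
"""
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for operating-point features with secure (0) / insecure (1) labels.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Interpretation analysis: model-agnostic SHAP values for P(insecure).
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], X_train[:100])
X_eval, y_eval = X_test[:30], y_test[:30]
shap_vals = explainer.shap_values(X_eval, nsamples=200)      # shape (30, 20)

# Attack targets: the k features with the largest mean absolute SHAP value.
k, eps = 3, 0.5                                              # assumed budget and bound
targets = np.argsort(np.abs(shap_vals).mean(axis=0))[-k:]

# Assign bounded perturbations to the targeted features only: try +/- eps and
# keep the direction that most reduces confidence in the true class.
X_adv = X_eval.copy()
for i in range(len(X_adv)):
    best, best_p = X_adv[i], model.predict_proba(X_adv[i:i + 1])[0, y_eval[i]]
    for sign in (-1.0, 1.0):
        cand = X_adv[i].copy()
        cand[targets] += sign * eps
        p = model.predict_proba(cand[None, :])[0, y_eval[i]]
        if p < best_p:
            best, best_p = cand, p
    X_adv[i] = best

print(f"clean accuracy   : {model.score(X_eval, y_eval):.2f}")
print(f"attacked accuracy: {model.score(X_adv, y_eval):.2f}")
```

Increasing eps in this sketch corresponds to permitting a larger perturbation bound, which the abstract reports raises the attack success rate.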
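The mitigation step can be sketched in the same spirit: the generated adversarial examples are folded back into the training set in increasing numbers, and clean accuracy is tracked against robust accuracy, mirroring the abstract's observation that both vary with the quantity of adversarial examples used. The helper below is a hypothetical illustration that reuses the artifacts of the previous sketch; in practice, adversarial examples for training would be crafted from training samples, with a separate adversarial test set reserved for evaluation.

```python
"""
Illustrative sketch of adversarial training as a mitigation countermeasure:
retrain the DSA classifier on the original data augmented with a growing
number of adversarial examples and track clean versus robust accuracy.
All names below are hypothetical and follow the previous attack sketch.
"""
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def adversarial_training_curve(X_train, y_train, X_adv, y_adv,
                               X_clean_test, y_clean_test,
                               X_adv_test, y_adv_test, step=10):
    """Retrain with n = 0, step, 2*step, ... adversarial examples and report accuracies."""
    curve = []
    for n in range(0, len(X_adv) + 1, step):
        X_aug = np.vstack([X_train, X_adv[:n]])
        y_aug = np.concatenate([y_train, y_adv[:n]])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y_aug)
        clean_acc = clf.score(X_clean_test, y_clean_test)   # accuracy on unperturbed samples
        robust_acc = clf.score(X_adv_test, y_adv_test)      # accuracy under the crafted attack
        curve.append((n, clean_acc, robust_acc))
        print(f"n_adv={n:3d}  clean={clean_acc:.3f}  robust={robust_acc:.3f}")
    return curve


# Hypothetical usage with the variables from the attack sketch (adversarial
# examples keep the true labels of the samples they were crafted from):
# adversarial_training_curve(X_train, y_train, X_adv, y_eval, X_test, y_test, X_adv, y_eval)
```

Plotting clean accuracy against robust accuracy over n makes the accuracy-robustness relationship referred to in the abstract directly visible.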


Source journal
IET Energy Systems Integration (Engineering, miscellaneous)
CiteScore: 5.90 · Self-citation rate: 8.30% · Articles published: 29 · Review period: 11 weeks