Rong Huang;Yuancheng Li;Peidong Yin;Xingyu Shang;Yuanyuan Wang
{"title":"电力系统数据驱动模型的可转移注意力分散对抗攻击","authors":"Rong Huang;Yuancheng Li;Peidong Yin;Xingyu Shang;Yuanyuan Wang","doi":"10.1109/TIFS.2025.3565993","DOIUrl":null,"url":null,"abstract":"As the digitalization of power systems progresses, data-driven models have garnered widespread attention due to their performance advantages, leading to the emergence of numerous data-driven intelligent models for power tasks, such as attack detection and stability assessment. However, data-driven models are susceptible to adversarial attacks, even when deployed in highly secure control centers. Considering the similarity in the semantic features extracted by structurally diverse data-driven models when addressing the same downstream tasks, this paper proposes a transferable attention-distracting adversarial attack tailored for power systems. This attack first introduces an adversarial perturbation selection framework with physical constraints specific to power systems. It also offers different loss functions to distract attention and strategies to weaken the significance of features. Simulation experiments confirm that distracting the model’s attention results in more stable transferable attack effects and significantly reduces the performance of data-driven models across different task scenarios. The experimental results underscore the importance of not neglecting the security and robustness of models in security-critical scenarios like power systems, even while achieving optimal performance.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4985-4998"},"PeriodicalIF":8.0000,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Transferable Attention-Distracting Adversarial Attack on Data-Driven Models for Power Systems\",\"authors\":\"Rong Huang;Yuancheng Li;Peidong Yin;Xingyu Shang;Yuanyuan Wang\",\"doi\":\"10.1109/TIFS.2025.3565993\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As the digitalization of power systems progresses, data-driven models have garnered widespread attention due to their performance advantages, leading to the emergence of numerous data-driven intelligent models for power tasks, such as attack detection and stability assessment. However, data-driven models are susceptible to adversarial attacks, even when deployed in highly secure control centers. Considering the similarity in the semantic features extracted by structurally diverse data-driven models when addressing the same downstream tasks, this paper proposes a transferable attention-distracting adversarial attack tailored for power systems. This attack first introduces an adversarial perturbation selection framework with physical constraints specific to power systems. It also offers different loss functions to distract attention and strategies to weaken the significance of features. Simulation experiments confirm that distracting the model’s attention results in more stable transferable attack effects and significantly reduces the performance of data-driven models across different task scenarios. 
The experimental results underscore the importance of not neglecting the security and robustness of models in security-critical scenarios like power systems, even while achieving optimal performance.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"20 \",\"pages\":\"4985-4998\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10992256/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10992256/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Transferable Attention-Distracting Adversarial Attack on Data-Driven Models for Power Systems
As the digitalization of power systems progresses, data-driven models have garnered widespread attention due to their performance advantages, leading to the emergence of numerous data-driven intelligent models for power tasks such as attack detection and stability assessment. However, data-driven models are susceptible to adversarial attacks, even when deployed in highly secure control centers. Considering the similarity of the semantic features extracted by structurally diverse data-driven models when addressing the same downstream tasks, this paper proposes a transferable attention-distracting adversarial attack tailored for power systems. The attack first introduces an adversarial perturbation selection framework that enforces physical constraints specific to power systems. It then applies different loss functions to distract the model's attention, together with strategies that weaken the significance of the features the model relies on. Simulation experiments confirm that distracting the model's attention yields more stable transferable attack effects and significantly reduces the performance of data-driven models across different task scenarios. The experimental results underscore that the security and robustness of models must not be neglected in security-critical scenarios like power systems, even while pursuing optimal performance.
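The abstract does not disclose the concrete optimization, but the idea of weakening feature significance to distract a surrogate model's attention can be illustrated with a short PyTorch sketch. Everything below is a hypothetical stand-in rather than the authors' actual algorithm: the surrogate `model`, the hooked `feature_layer`, the element-wise bound `eps` standing in for the paper's physical constraints, and the gradient-based feature-importance weighting are all assumptions made for illustration.

```python
# Hypothetical sketch: suppress the intermediate features a surrogate model deems
# important, so that its "attention" is distracted from the original evidence.
# This is NOT the paper's method; eps is a crude stand-in for physical constraints.
import torch
import torch.nn as nn

def attention_distracting_attack(model: nn.Module,
                                 feature_layer: nn.Module,
                                 x: torch.Tensor,
                                 eps: float = 0.05,    # hypothetical per-feature bound
                                 alpha: float = 0.01,  # step size
                                 steps: int = 40) -> torch.Tensor:
    """Craft a perturbation that weakens important intermediate features."""
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda m, inp, out: feats.__setitem__("z", out))

    # 1) Estimate feature importance on the clean input: gradient of the summed
    #    logits with respect to the hooked intermediate features, then detach.
    x_clean = x.clone().detach().requires_grad_(True)
    logits = model(x_clean)
    importance = torch.autograd.grad(logits.sum(), feats["z"])[0].detach()

    # 2) PGD-style descent on the importance-weighted feature response, which
    #    pushes down exactly the activations the surrogate originally relied on.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        model(x + delta)                        # forward pass refreshes feats["z"]
        loss = (importance * feats["z"]).sum()  # weighted feature significance
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= alpha * grad.sign()        # reduce the weighted response
            delta.clamp_(-eps, eps)             # stand-in for physical limits
    handle.remove()
    return (x + delta).detach()
```

In a transfer setting, the perturbed sample returned here would be crafted on the white-box surrogate and then evaluated against unseen black-box data-driven models for the same power-system task.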
Journal introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.