{"title":"对人工神经网络的破坏性攻击:攻击技术、检测方法和保护策略的系统回顾","authors":"Ahmad Alobaid , Talal Bonny , Maher Alrahhal","doi":"10.1016/j.iswa.2025.200529","DOIUrl":null,"url":null,"abstract":"<div><div>This paper provides a systematic review of disruptive attacks on artificial neural networks (ANNs). As neural networks become increasingly integral to critical applications, their vulnerability to various forms of attack poses significant security challenges. This review categorizes and analyzes recent advancements in attack techniques, detection methods, and protection strategies for ANNs. It explores various attacks, including adversarial attacks, data poisoning, fault injections, membership inference, model inversion, timing, and watermarking attacks, examining their methodologies, limitations, impacts, and potential improvements. Key findings reveal that while detection and protection mechanisms such as adversarial training, noise injection, and hardware-based defenses have advanced significantly, many existing solutions remain vulnerable to adaptive attack strategies and scalability challenges. Additionally, fault injection attacks at the hardware level pose an emerging threat with limited countermeasures. The review identifies critical gaps in defense strategies, particularly in balancing robustness, computational efficiency, and real-world applicability. Future research should focus on scalable defense solutions to ensure effective deployment across diverse ANN architectures and critical applications, such as autonomous systems. Furthermore, integrating emerging technologies, including generative AI models and hybrid architectures, should be prioritized to better understand and mitigate their vulnerabilities.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"26 ","pages":"Article 200529"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Disruptive attacks on artificial neural networks: A systematic review of attack techniques, detection methods, and protection strategies\",\"authors\":\"Ahmad Alobaid , Talal Bonny , Maher Alrahhal\",\"doi\":\"10.1016/j.iswa.2025.200529\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>This paper provides a systematic review of disruptive attacks on artificial neural networks (ANNs). As neural networks become increasingly integral to critical applications, their vulnerability to various forms of attack poses significant security challenges. This review categorizes and analyzes recent advancements in attack techniques, detection methods, and protection strategies for ANNs. It explores various attacks, including adversarial attacks, data poisoning, fault injections, membership inference, model inversion, timing, and watermarking attacks, examining their methodologies, limitations, impacts, and potential improvements. Key findings reveal that while detection and protection mechanisms such as adversarial training, noise injection, and hardware-based defenses have advanced significantly, many existing solutions remain vulnerable to adaptive attack strategies and scalability challenges. Additionally, fault injection attacks at the hardware level pose an emerging threat with limited countermeasures. The review identifies critical gaps in defense strategies, particularly in balancing robustness, computational efficiency, and real-world applicability. 
Future research should focus on scalable defense solutions to ensure effective deployment across diverse ANN architectures and critical applications, such as autonomous systems. Furthermore, integrating emerging technologies, including generative AI models and hybrid architectures, should be prioritized to better understand and mitigate their vulnerabilities.</div></div>\",\"PeriodicalId\":100684,\"journal\":{\"name\":\"Intelligent Systems with Applications\",\"volume\":\"26 \",\"pages\":\"Article 200529\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Systems with Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2667305325000559\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems with Applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667305325000559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Disruptive attacks on artificial neural networks: A systematic review of attack techniques, detection methods, and protection strategies
This paper provides a systematic review of disruptive attacks on artificial neural networks (ANNs). As neural networks become increasingly integral to critical applications, their vulnerability to various forms of attack poses significant security challenges. This review categorizes and analyzes recent advances in attack techniques, detection methods, and protection strategies for ANNs. It examines adversarial attacks, data poisoning, fault injection, membership inference, model inversion, timing attacks, and watermarking attacks, covering their methodologies, limitations, impacts, and potential improvements. Key findings reveal that while detection and protection mechanisms such as adversarial training, noise injection, and hardware-based defenses have advanced significantly, many existing solutions remain vulnerable to adaptive attack strategies and scalability challenges. Additionally, fault injection attacks at the hardware level pose an emerging threat with limited countermeasures. The review identifies critical gaps in defense strategies, particularly in balancing robustness, computational efficiency, and real-world applicability. Future research should focus on scalable defense solutions to ensure effective deployment across diverse ANN architectures and critical applications, such as autonomous systems. Furthermore, integrating emerging technologies, including generative AI models and hybrid architectures, should be prioritized to better understand and mitigate their vulnerabilities.
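To make two of the terms in the abstract concrete, the sketch below shows a fast gradient sign method (FGSM) adversarial attack and a single adversarial-training step. This is an illustrative example, not code from the reviewed paper: the toy linear model, the random data, and the epsilon value are assumptions chosen only to keep the snippet self-contained.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Fast Gradient Sign Method: perturb x in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step per input feature; epsilon bounds the L-infinity perturbation.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical setup: a toy linear classifier on random 20-dimensional inputs.
model = nn.Linear(20, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(8, 20)
y = torch.randint(0, 3, (8,))

# Adversarial training step: fit the model on the perturbed inputs rather than the clean ones.
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()
optimizer.step()
```

Noise injection, another defense named in the abstract, can be sketched the same way by replacing the FGSM perturbation with random Gaussian noise added to x during training.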