Disruptive attacks on artificial neural networks: A systematic review of attack techniques, detection methods, and protection strategies

Ahmad Alobaid, Talal Bonny, Maher Alrahhal
{"title":"Disruptive attacks on artificial neural networks: A systematic review of attack techniques, detection methods, and protection strategies","authors":"Ahmad Alobaid ,&nbsp;Talal Bonny ,&nbsp;Maher Alrahhal","doi":"10.1016/j.iswa.2025.200529","DOIUrl":null,"url":null,"abstract":"<div><div>This paper provides a systematic review of disruptive attacks on artificial neural networks (ANNs). As neural networks become increasingly integral to critical applications, their vulnerability to various forms of attack poses significant security challenges. This review categorizes and analyzes recent advancements in attack techniques, detection methods, and protection strategies for ANNs. It explores various attacks, including adversarial attacks, data poisoning, fault injections, membership inference, model inversion, timing, and watermarking attacks, examining their methodologies, limitations, impacts, and potential improvements. Key findings reveal that while detection and protection mechanisms such as adversarial training, noise injection, and hardware-based defenses have advanced significantly, many existing solutions remain vulnerable to adaptive attack strategies and scalability challenges. Additionally, fault injection attacks at the hardware level pose an emerging threat with limited countermeasures. The review identifies critical gaps in defense strategies, particularly in balancing robustness, computational efficiency, and real-world applicability. Future research should focus on scalable defense solutions to ensure effective deployment across diverse ANN architectures and critical applications, such as autonomous systems. Furthermore, integrating emerging technologies, including generative AI models and hybrid architectures, should be prioritized to better understand and mitigate their vulnerabilities.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"26 ","pages":"Article 200529"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems with Applications","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667305325000559","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper provides a systematic review of disruptive attacks on artificial neural networks (ANNs). As neural networks become increasingly integral to critical applications, their vulnerability to various forms of attack poses significant security challenges. This review categorizes and analyzes recent advancements in attack techniques, detection methods, and protection strategies for ANNs. It explores various attacks, including adversarial attacks, data poisoning, fault injections, membership inference, model inversion, timing, and watermarking attacks, examining their methodologies, limitations, impacts, and potential improvements. Key findings reveal that while detection and protection mechanisms such as adversarial training, noise injection, and hardware-based defenses have advanced significantly, many existing solutions remain vulnerable to adaptive attack strategies and scalability challenges. Additionally, fault injection attacks at the hardware level pose an emerging threat with limited countermeasures. The review identifies critical gaps in defense strategies, particularly in balancing robustness, computational efficiency, and real-world applicability. Future research should focus on scalable defense solutions to ensure effective deployment across diverse ANN architectures and critical applications, such as autonomous systems. Furthermore, integrating emerging technologies, including generative AI models and hybrid architectures, should be prioritized to better understand and mitigate their vulnerabilities.
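The review itself presents no code; as a minimal illustration of two concepts the abstract names, the sketch below shows a one-step FGSM perturbation (a canonical gradient-based adversarial attack) and the corresponding adversarial-training step that the review lists among protection mechanisms. The function names, the epsilon value, and the model/optimizer are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch (PyTorch), assuming a classifier `model` with image inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x by epsilon * sign(grad of loss w.r.t. x) -- FGSM (Goodfellow et al., 2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                            # populates x.grad
    x_adv = x + epsilon * x.grad.sign()        # one-step gradient-sign perturbation
    return x_adv.clamp(0.0, 1.0).detach()      # keep pixels in the valid range

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step: optimize the loss on perturbed inputs."""
    x_adv = fgsm_attack(model, x, y, epsilon)  # craft adversarial examples on the fly
    optimizer.zero_grad()                      # discard gradients left by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Substituting the clean batch with `x_adv` in each training step is the simplest form of the adversarial training the review discusses; stronger variants use multi-step attacks such as PGD in place of FGSM.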