A method for recovering adversarial samples with both adversarial attack forensics and recognition accuracy

IF 4.8 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS
Zigang Chen, Zhen Wang, Yuening Zhou, Fan Liu, Yuhong Liu, Tao Leng, Haihua Zhu
{"title":"一种同时具备对抗性攻击取证和识别准确性的对抗性样本恢复方法","authors":"Zigang Chen ,&nbsp;Zhen Wang ,&nbsp;Yuening Zhou ,&nbsp;Fan Liu ,&nbsp;Yuhong Liu ,&nbsp;Tao Leng ,&nbsp;Haihua Zhu","doi":"10.1016/j.cose.2024.103987","DOIUrl":null,"url":null,"abstract":"<div><p>Adversarial samples deceive machine learning models through small but elaborate modifications that lead to erroneous outputs. The severity of the adversarial sample problem has come to the forefront with the widespread use of machine learning in areas such as security systems, autonomous driving, speech recognition, finance, and medical diagnostics. Malicious attackers can use adversarial samples to circumvent security detection systems, interfere with autonomous driving perception, mislead speech recognition, defraud financial systems, and even cause medical diagnosis errors. The emergence of adversarial samples exposes the vulnerability of existing models and poses challenges for information tracing and forensics after the incident. The main goal of current adversarial sample restoration methods is to improve model robustness. Traditional approaches focus only on improving the model’s classification accuracy, ignoring the importance of adversarial information, which is crucial for understanding the attack mechanism and strengthening future defenses. To address this issue, we propose an adversarial sample restoration method based on the similarity between clean and adversarial sample blocks to balance the needs of adversarial forensics and recognition accuracy. We implement the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), and Momentum Iterative Attack (MIA) attacks on MNIST, F-MNIST, and EMNIST datasets and perform experimental validation. The results demonstrate that our restoration method significantly enhances the model’s classification accuracy across various datasets and attack scenarios. Comparative analysis shows that the restored samples maintain a high similarity with the original adversarial samples, proving the method’s effectiveness. In addition, we performed performance tests on pre- and post-recovery samples. Taking the MNIST dataset as an example, we observed that the model performance metrics, such as MAPE, MAE, RMSE, and VAPE, of the restored samples improved by 88%, 88%, 65%, and 82%, respectively, after using the FGSM attack. This indicates that our restoration method successfully preserves the information of the generation mechanism of the adversarial samples and improves the model’s performance. This approach balances forensic capability and prediction accuracy, demonstrates a new direction in adversarial sample research, and substantially impacts security defense in practical applications.</p></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":null,"pages":null},"PeriodicalIF":4.8000,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A method for recovering adversarial samples with both adversarial attack forensics and recognition accuracy\",\"authors\":\"Zigang Chen ,&nbsp;Zhen Wang ,&nbsp;Yuening Zhou ,&nbsp;Fan Liu ,&nbsp;Yuhong Liu ,&nbsp;Tao Leng ,&nbsp;Haihua Zhu\",\"doi\":\"10.1016/j.cose.2024.103987\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Adversarial samples deceive machine learning models through small but elaborate modifications that lead to erroneous outputs. 
The severity of the adversarial sample problem has come to the forefront with the widespread use of machine learning in areas such as security systems, autonomous driving, speech recognition, finance, and medical diagnostics. Malicious attackers can use adversarial samples to circumvent security detection systems, interfere with autonomous driving perception, mislead speech recognition, defraud financial systems, and even cause medical diagnosis errors. The emergence of adversarial samples exposes the vulnerability of existing models and poses challenges for information tracing and forensics after the incident. The main goal of current adversarial sample restoration methods is to improve model robustness. Traditional approaches focus only on improving the model’s classification accuracy, ignoring the importance of adversarial information, which is crucial for understanding the attack mechanism and strengthening future defenses. To address this issue, we propose an adversarial sample restoration method based on the similarity between clean and adversarial sample blocks to balance the needs of adversarial forensics and recognition accuracy. We implement the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), and Momentum Iterative Attack (MIA) attacks on MNIST, F-MNIST, and EMNIST datasets and perform experimental validation. The results demonstrate that our restoration method significantly enhances the model’s classification accuracy across various datasets and attack scenarios. Comparative analysis shows that the restored samples maintain a high similarity with the original adversarial samples, proving the method’s effectiveness. In addition, we performed performance tests on pre- and post-recovery samples. Taking the MNIST dataset as an example, we observed that the model performance metrics, such as MAPE, MAE, RMSE, and VAPE, of the restored samples improved by 88%, 88%, 65%, and 82%, respectively, after using the FGSM attack. This indicates that our restoration method successfully preserves the information of the generation mechanism of the adversarial samples and improves the model’s performance. This approach balances forensic capability and prediction accuracy, demonstrates a new direction in adversarial sample research, and substantially impacts security defense in practical applications.</p></div>\",\"PeriodicalId\":51004,\"journal\":{\"name\":\"Computers & Security\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2024-07-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S016740482400292X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Security","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S016740482400292X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Adversarial samples deceive machine learning models through small but elaborate modifications that lead to erroneous outputs. The severity of the adversarial sample problem has come to the forefront with the widespread use of machine learning in areas such as security systems, autonomous driving, speech recognition, finance, and medical diagnostics. Malicious attackers can use adversarial samples to circumvent security detection systems, interfere with autonomous driving perception, mislead speech recognition, defraud financial systems, and even cause medical diagnosis errors. The emergence of adversarial samples exposes the vulnerability of existing models and poses challenges for information tracing and forensics after the incident. The main goal of current adversarial sample restoration methods is to improve model robustness. Traditional approaches focus only on improving the model’s classification accuracy, ignoring the importance of adversarial information, which is crucial for understanding the attack mechanism and strengthening future defenses. To address this issue, we propose an adversarial sample restoration method based on the similarity between clean and adversarial sample blocks to balance the needs of adversarial forensics and recognition accuracy. We implement the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), and Momentum Iterative Attack (MIA) attacks on MNIST, F-MNIST, and EMNIST datasets and perform experimental validation. The results demonstrate that our restoration method significantly enhances the model’s classification accuracy across various datasets and attack scenarios. Comparative analysis shows that the restored samples maintain a high similarity with the original adversarial samples, proving the method’s effectiveness. In addition, we performed performance tests on pre- and post-recovery samples. Taking the MNIST dataset as an example, we observed that the model performance metrics, such as MAPE, MAE, RMSE, and VAPE, of the restored samples improved by 88%, 88%, 65%, and 82%, respectively, after using the FGSM attack. This indicates that our restoration method successfully preserves the information of the generation mechanism of the adversarial samples and improves the model’s performance. This approach balances forensic capability and prediction accuracy, demonstrates a new direction in adversarial sample research, and substantially impacts security defense in practical applications.

Source journal: Computers & Security (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 12.40
Self-citation rate: 7.10%
Articles per year: 365
Review time: 10.7 months
About the journal: Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world. Computers & Security provides you with a unique blend of leading edge research and sound practical management advice. It is aimed at the professional involved with computer security, audit, control and data integrity in all sectors - industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise it is your first step to fully secure systems.