Unveiling vulnerabilities in deep learning-based malware detection: Differential privacy driven adversarial attacks

IF 4.8 · CAS Tier 2 (Computer Science) · Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
{"title":"揭示基于深度学习的恶意软件检测中的漏洞:差异隐私驱动的对抗性攻击","authors":"","doi":"10.1016/j.cose.2024.104035","DOIUrl":null,"url":null,"abstract":"<div><p>The exponential increase of Android malware creates a severe threat, motivating the development of machine learning and especially deep learning-based classifiers to detect and mitigate malicious applications. However, these classifiers are susceptible to adversarial attacks that manipulate input data to deceive the classifier and compromise performance. This paper investigates the vulnerability of deep learning-based Android malware classifiers against two adversarial attacks: Data Poisoning with Noise Injection (DP-NI) and Gradient-based Data Poisoning (GDP). In these attacks, we explore the utilization of differential privacy techniques by attackers aiming to compromise the effectiveness of deep learning based Android malware classifiers. We propose and evaluate a novel defense mechanism, Differential Privacy-Based Noise Clipping (DP-NC), designed to enhance the robustness of Android malware classifiers against these adversarial attacks. By leveraging deep neural networks and adversarial training techniques, DP-NC demonstrates remarkable efficacy in mitigating the impact of both DP-NI and GDP attacks. Through extensive experimentation on <em>three</em> diverse Android datasets (Drebin, Contagio, and Genome), we evaluate the performance of DP-NC against proposed adversarial attacks. Our results show that DP-NC significantly reduces the false-positive rate and improves classification accuracy across all datasets and attack scenarios. For instance, our findings on the Drebin dataset reveal a significant decrease in accuracy to 51% and 30% after applying DP-NI and GDP techniques, respectively. However, upon applying the DP-NC defense mechanism, the accuracy in both cases improved to approximately 70%. Furthermore, employing DP-NC defense against DP-NI and GDP attacks leads to a notable reduction in false positive rates by 45.46% and 7.67%, respectively. Similar results have been obtained in two other datasets, Contagio and Genome. These results underscore the effectiveness of DP-NC in enhancing the robustness of deep learning-based Android malware classifiers against adversarial attacks.</p></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":null,"pages":null},"PeriodicalIF":4.8000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Unveiling vulnerabilities in deep learning-based malware detection: Differential privacy driven adversarial attacks\",\"authors\":\"\",\"doi\":\"10.1016/j.cose.2024.104035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The exponential increase of Android malware creates a severe threat, motivating the development of machine learning and especially deep learning-based classifiers to detect and mitigate malicious applications. However, these classifiers are susceptible to adversarial attacks that manipulate input data to deceive the classifier and compromise performance. This paper investigates the vulnerability of deep learning-based Android malware classifiers against two adversarial attacks: Data Poisoning with Noise Injection (DP-NI) and Gradient-based Data Poisoning (GDP). In these attacks, we explore the utilization of differential privacy techniques by attackers aiming to compromise the effectiveness of deep learning based Android malware classifiers. 
We propose and evaluate a novel defense mechanism, Differential Privacy-Based Noise Clipping (DP-NC), designed to enhance the robustness of Android malware classifiers against these adversarial attacks. By leveraging deep neural networks and adversarial training techniques, DP-NC demonstrates remarkable efficacy in mitigating the impact of both DP-NI and GDP attacks. Through extensive experimentation on <em>three</em> diverse Android datasets (Drebin, Contagio, and Genome), we evaluate the performance of DP-NC against proposed adversarial attacks. Our results show that DP-NC significantly reduces the false-positive rate and improves classification accuracy across all datasets and attack scenarios. For instance, our findings on the Drebin dataset reveal a significant decrease in accuracy to 51% and 30% after applying DP-NI and GDP techniques, respectively. However, upon applying the DP-NC defense mechanism, the accuracy in both cases improved to approximately 70%. Furthermore, employing DP-NC defense against DP-NI and GDP attacks leads to a notable reduction in false positive rates by 45.46% and 7.67%, respectively. Similar results have been obtained in two other datasets, Contagio and Genome. These results underscore the effectiveness of DP-NC in enhancing the robustness of deep learning-based Android malware classifiers against adversarial attacks.</p></div>\",\"PeriodicalId\":51004,\"journal\":{\"name\":\"Computers & Security\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167404824003407\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Security","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167404824003407","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


The exponential increase of Android malware creates a severe threat, motivating the development of machine learning and especially deep learning-based classifiers to detect and mitigate malicious applications. However, these classifiers are susceptible to adversarial attacks that manipulate input data to deceive the classifier and degrade its performance. This paper investigates the vulnerability of deep learning-based Android malware classifiers to two adversarial attacks: Data Poisoning with Noise Injection (DP-NI) and Gradient-based Data Poisoning (GDP). In these attacks, we explore the utilization of differential privacy techniques by attackers aiming to compromise the effectiveness of deep learning-based Android malware classifiers. We propose and evaluate a novel defense mechanism, Differential Privacy-Based Noise Clipping (DP-NC), designed to enhance the robustness of Android malware classifiers against these adversarial attacks. By leveraging deep neural networks and adversarial training techniques, DP-NC demonstrates remarkable efficacy in mitigating the impact of both DP-NI and GDP attacks. Through extensive experimentation on three diverse Android datasets (Drebin, Contagio, and Genome), we evaluate the performance of DP-NC against the proposed adversarial attacks. Our results show that DP-NC significantly reduces the false-positive rate and improves classification accuracy across all datasets and attack scenarios. For instance, our findings on the Drebin dataset reveal a significant decrease in accuracy to 51% and 30% after applying the DP-NI and GDP techniques, respectively. However, upon applying the DP-NC defense mechanism, the accuracy in both cases improved to approximately 70%. Furthermore, employing the DP-NC defense against DP-NI and GDP attacks leads to a notable reduction in false-positive rates of 45.46% and 7.67%, respectively. Similar results were obtained on the two other datasets, Contagio and Genome. These results underscore the effectiveness of DP-NC in enhancing the robustness of deep learning-based Android malware classifiers against adversarial attacks.
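The abstract does not include implementation details, but the techniques it names follow well-known recipes, and a small sketch helps make them concrete. Below is a minimal NumPy illustration, not the authors' method: `dp_noise_injection_poison` assumes DP-NI perturbs a fraction of training feature vectors with Gaussian noise calibrated by the standard (epsilon, delta) mechanism; `gradient_poison` stands in for GDP with an FGSM-style sign step on a linear surrogate model rather than a deep network; `noise_clipping_defense` bounds each sample's L2 norm in the spirit of DP-NC. All function names, parameters, and the linear surrogate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_noise_injection_poison(X, epsilon=1.0, delta=1e-5, sensitivity=1.0, frac=0.2):
    # Gaussian mechanism: sigma = S * sqrt(2 ln(1.25/delta)) / epsilon.
    # Assumption: DP-NI adds noise calibrated this way to a subset of
    # training feature vectors.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    X_out = X.copy()
    X_out[idx] += rng.normal(0.0, sigma, size=X_out[idx].shape)
    return X_out, idx

def gradient_poison(X, y, w, step=0.5, frac=0.2):
    # Stand-in for GDP: move samples along the input gradient of a
    # logistic loss (FGSM-style sign step), using a linear surrogate
    # model w instead of the paper's deep classifier.
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    p = 1.0 / (1.0 + np.exp(-X[idx] @ w))          # sigmoid(w . x)
    grad_x = (p - y[idx])[:, None] * w[None, :]    # d(loss)/d(x) per sample
    X_out = X.copy()
    X_out[idx] += step * np.sign(grad_x)
    return X_out, idx

def noise_clipping_defense(X, clip_norm=3.0):
    # Sketch of the noise-clipping idea: rescale any feature vector
    # whose L2 norm exceeds clip_norm, bounding the influence of
    # noise-injected samples before (re)training the classifier.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

# Toy demo on random stand-in malware feature vectors.
X = rng.random((200, 50))
y = rng.integers(0, 2, size=200).astype(float)
w = rng.normal(size=50)

X_ni, _ = dp_noise_injection_poison(X, epsilon=0.5)
X_gdp, _ = gradient_poison(X, y, w)
for name, Xa in [("DP-NI", X_ni), ("GDP", X_gdp)]:
    before = np.linalg.norm(Xa, axis=1).max()
    after = np.linalg.norm(noise_clipping_defense(Xa), axis=1).max()
    print(f"{name}: max sample norm {before:.2f} -> {after:.2f} after clipping")
```

In this toy setup, a small epsilon (strong DP noise) makes the injected perturbations dominate the clean features, which is consistent in spirit with the large accuracy drops the paper reports; clipping then caps every sample's norm so no poisoned point can exert unbounded influence on training.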

Source journal
Computers & Security
Category: Engineering & Technology / Computer Science: Information Systems
CiteScore: 12.40
Self-citation rate: 7.10%
Annual articles: 365
Review time: 10.7 months
Journal introduction: Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world. Computers & Security provides you with a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control and data integrity in all sectors: industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.