Title: Unveiling vulnerabilities in deep learning-based malware detection: Differential privacy driven adversarial attacks
Journal: Computers & Security (Q1, Computer Science, Information Systems; Impact Factor 4.8)
DOI: 10.1016/j.cose.2024.104035
Publication date: 2024-08-08 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167404824003407
Citations: 0
Abstract
The exponential increase of Android malware poses a severe threat, motivating the development of machine learning and especially deep learning-based classifiers to detect and mitigate malicious applications. However, these classifiers are susceptible to adversarial attacks that manipulate input data to deceive the classifier and compromise performance. This paper investigates the vulnerability of deep learning-based Android malware classifiers to two adversarial attacks: Data Poisoning with Noise Injection (DP-NI) and Gradient-based Data Poisoning (GDP). In these attacks, we explore how attackers can exploit differential privacy techniques to compromise the effectiveness of deep learning-based Android malware classifiers. We propose and evaluate a novel defense mechanism, Differential Privacy-Based Noise Clipping (DP-NC), designed to enhance the robustness of Android malware classifiers against these adversarial attacks. By leveraging deep neural networks and adversarial training techniques, DP-NC demonstrates remarkable efficacy in mitigating the impact of both DP-NI and GDP attacks. Through extensive experimentation on three diverse Android datasets (Drebin, Contagio, and Genome), we evaluate the performance of DP-NC against the proposed adversarial attacks. Our results show that DP-NC significantly reduces the false-positive rate and improves classification accuracy across all datasets and attack scenarios. For instance, our findings on the Drebin dataset reveal a significant decrease in accuracy to 51% and 30% after applying the DP-NI and GDP techniques, respectively. However, upon applying the DP-NC defense mechanism, the accuracy in both cases improved to approximately 70%. Furthermore, employing the DP-NC defense against DP-NI and GDP attacks leads to a notable reduction in false-positive rates of 45.46% and 7.67%, respectively. Similar results were obtained on the two other datasets, Contagio and Genome.
These results underscore the effectiveness of DP-NC in enhancing the robustness of deep learning-based Android malware classifiers against adversarial attacks.
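The abstract does not give the attack and defense algorithms in detail, but the core ideas it names can be illustrated: a DP-NI-style attack perturbs training samples with noise calibrated as in differential privacy (Laplace noise with scale = sensitivity / epsilon, so smaller epsilon means heavier poisoning), and a DP-NC-style defense bounds the magnitude of per-sample perturbations by norm clipping, in the spirit of gradient clipping in DP-SGD. The sketch below is a minimal illustration under these assumptions; the function names, parameters, and clipping strategy are hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_noise_injection(X, epsilon=1.0, sensitivity=1.0):
    """Illustrative DP-NI-style poisoning (hypothetical implementation):
    perturb training features with Laplace noise whose scale follows the
    standard DP calibration, scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return X + rng.laplace(loc=0.0, scale=scale, size=X.shape)

def dp_noise_clipping(X_suspect, X_reference, clip_norm=0.5):
    """Illustrative DP-NC-style defense (hypothetical implementation):
    clip each sample's deviation from a trusted reference so its L2 norm
    never exceeds clip_norm, bounding the influence of injected noise."""
    delta = X_suspect - X_reference
    norms = np.linalg.norm(delta, axis=1, keepdims=True)
    # Scale down only the rows whose perturbation exceeds the clip bound.
    factor = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return X_reference + delta * factor
```

In this toy setting the clean features serve as the trusted reference; in a realistic pipeline the defender would not have clean copies, so the paper's actual defense must estimate and suppress the noise differently. The sketch only conveys the noise-calibration and clipping intuition.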
About the Journal:
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides you with a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control, and data integrity in all sectors: industry, commerce, and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.