GBKPA and AuxShield: Addressing adversarial robustness and transferability in android malware detection

Authors: Kumarakrishna Valeti, Hemant Rathore
DOI: 10.1016/j.fsidi.2024.301816
Journal: Forensic Science International: Digital Investigation (Journal Article)
Publication date: 2024-10-01

Abstract

Android stands as the predominant operating system within the mobile ecosystem. Users can download applications from official sources such as the Google Play Store as well as from third-party platforms. However, malicious actors can attempt to compromise the integrity of user devices through malicious applications. Traditionally, signatures, rules, and other methods have been employed to detect malware and protect device integrity. However, the growing number and complexity of malicious applications have prompted the exploration of newer techniques such as machine learning (ML) and deep learning (DL). Many recent studies have demonstrated promising results in detecting malicious applications with ML and DL solutions. However, research in other fields, such as computer vision, has shown that ML and DL solutions are vulnerable to targeted adversarial attacks. Malicious actors can develop adversarial applications that bypass ML- and DL-based antivirus engines. The study of adversarial techniques for malware detection has therefore captured the security community's attention. In this work, we use Android permissions and intents to construct 28 distinct malware detection models based on 14 classification algorithms. We then introduce a novel targeted false-negative evasion attack, the Gradient Based K Perturbation Attack (GBKPA), designed for grey-box knowledge scenarios, to assess the robustness of these models. GBKPA attempts to craft malicious adversarial samples with minimal perturbations while preserving the syntactic and functional structure of the application. GBKPA achieved an average fooling rate (FR) of 77% with only five perturbations across the 28 detection models. Additionally, we identify the Android permissions and intents that are most vulnerable to exploitation in evasion attacks. Furthermore, we analyse the transferability of adversarial samples across different classes of models and explain why it occurs. Finally, we propose the AuxShield defence mechanism for building robust detection models; AuxShield reduced the average FR to 3.25% across the 28 detection models. Our findings underscore the need to understand the causes of adversarial samples, their transferability, and robust defence strategies before deploying ML and DL solutions in the real world.
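The abstract describes GBKPA only at a high level: gradient-guided, at most K feature perturbations, and no changes that break the application's syntactic or functional structure. The sketch below is a hypothetical illustration of that general idea, not the paper's actual implementation: it uses a toy logistic-regression detector over binary permission/intent features, and it restricts perturbations to feature additions (0 → 1) on the assumption that adding an unused permission or intent does not remove app functionality. The model, weights, and feature encoding are all invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_k_perturbation(x, w, b, k):
    """Greedily flip up to k absent features (0 -> 1 only, so no
    functionality is removed from the app), choosing at each step the
    feature whose gradient most decreases the malware score."""
    x_adv = x.copy().astype(float)
    for _ in range(k):
        p = sigmoid(w @ x_adv + b)       # current malware probability
        grad = p * (1.0 - p) * w         # d p / d x for a logistic model
        # Candidates: features currently absent whose addition lowers
        # the score (negative gradient component).
        candidates = np.where((x_adv == 0) & (grad < 0))[0]
        if candidates.size == 0:
            break                        # no helpful perturbation left
        best = candidates[np.argmin(grad[candidates])]
        x_adv[best] = 1.0                # add that permission/intent
    return x_adv

# Toy detector: positive weights mark malware-associated features,
# negative weights mark features common in benign apps.
w = np.array([2.0, 1.5, -3.0, -2.5, 0.5])
b = -0.5
x_mal = np.array([1, 1, 0, 0, 1])        # a detected malicious sample

x_adv = gradient_k_perturbation(x_mal, w, b, k=2)
print(sigmoid(w @ x_mal + b))            # score before: well above 0.5
print(sigmoid(w @ x_adv + b))            # score after: pushed below 0.5
```

In this framing, the "fooling rate" reported in the abstract would be the fraction of malicious samples whose perturbed score drops below the detection threshold within the perturbation budget; the real attack's gradient estimation under grey-box knowledge is necessarily more involved than this white-box toy.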