{"title":"基于深度学习的恶意软件检测器多策略对抗攻击方法","authors":"Wang Yang, Fan Yin","doi":"10.1109/CSP58884.2023.00018","DOIUrl":null,"url":null,"abstract":"Deep learning allows building high-accuracy malware detectors without complicated feature engineering. However, research shows that the deep learning model is vulnerable and can be deceived if attackers add perturbation to input samples to craft adversarial examples deliberately. By altering the pixel values of the images, attackers have been able to generate adversarial examples that can fool state-of-the-art deep learning based image classifiers. However, Windows malware is a structured binary program file. Therefore, arbitrarily altering its contents will often break the program's functionality. In order to solve this problem, a standard but inefficient method is to run the sample in the sandbox to verify whether its functionality is preserved. This paper proposes a multi-strategy adversarial attack method, which can generate malware adversarial examples with functionality-preserving. Our method manipulates the redundant or extended space in the Windows malware binary, so it will not break functionality. Experiments show that our method has a high attack success rate and efficiency.","PeriodicalId":255083,"journal":{"name":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Multi-Strategy Adversarial Attack Method for Deep Learning Based Malware Detectors\",\"authors\":\"Wang Yang, Fan Yin\",\"doi\":\"10.1109/CSP58884.2023.00018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning allows building high-accuracy malware detectors without complicated feature engineering. However, research shows that the deep learning model is vulnerable and can be deceived if attackers add perturbation to input samples to craft adversarial examples deliberately. By altering the pixel values of the images, attackers have been able to generate adversarial examples that can fool state-of-the-art deep learning based image classifiers. However, Windows malware is a structured binary program file. Therefore, arbitrarily altering its contents will often break the program's functionality. In order to solve this problem, a standard but inefficient method is to run the sample in the sandbox to verify whether its functionality is preserved. This paper proposes a multi-strategy adversarial attack method, which can generate malware adversarial examples with functionality-preserving. Our method manipulates the redundant or extended space in the Windows malware binary, so it will not break functionality. 
Experiments show that our method has a high attack success rate and efficiency.\",\"PeriodicalId\":255083,\"journal\":{\"name\":\"2023 7th International Conference on Cryptography, Security and Privacy (CSP)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 7th International Conference on Cryptography, Security and Privacy (CSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSP58884.2023.00018\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSP58884.2023.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Multi-Strategy Adversarial Attack Method for Deep Learning Based Malware Detectors
Abstract: Deep learning makes it possible to build high-accuracy malware detectors without complicated feature engineering. However, research shows that deep learning models are vulnerable: attackers can deceive them by deliberately adding perturbations to input samples to craft adversarial examples. By altering pixel values, attackers have generated adversarial examples that fool state-of-the-art deep learning based image classifiers. Windows malware, however, is a structured binary program file, so arbitrarily altering its contents will often break the program's functionality. A standard but inefficient way to address this problem is to run each perturbed sample in a sandbox to verify that its functionality is preserved. This paper proposes a multi-strategy adversarial attack method that generates functionality-preserving malware adversarial examples. Our method manipulates only the redundant or extended space in the Windows malware binary, so it does not break functionality. Experiments show that our method achieves a high attack success rate with high efficiency.
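The abstract does not name the specific redundant regions the method manipulates. As an illustration only, the sketch below shows two perturbations widely used in the functionality-preserving attack literature: appending bytes to the PE overlay and overwriting section slack space. The third-party `pefile` library, the helper names, and the file names are assumptions of this sketch, not part of the paper.

```python
# Minimal sketch (not the authors' released code) of two functionality-
# preserving perturbations on a Windows PE file, assuming the `pefile`
# library. File names and byte counts are hypothetical.
import os
import pefile

def append_overlay(data: bytes, n_bytes: int) -> bytes:
    # Bytes appended after the end of the PE image (the "overlay") are
    # ignored by the Windows loader, so execution is unaffected.
    return data + os.urandom(n_bytes)

def fill_section_slack(path: str) -> bytes:
    # The file-alignment padding between a section's virtual size and its
    # raw size on disk is never executed, so overwriting it preserves the
    # program's behavior.
    pe = pefile.PE(path)
    data = bytearray(pe.__data__)
    for s in pe.sections:
        slack = s.SizeOfRawData - s.Misc_VirtualSize
        if slack > 0:
            start = s.PointerToRawData + s.Misc_VirtualSize
            data[start:start + slack] = os.urandom(slack)
    return bytes(data)

if __name__ == "__main__":
    perturbed = fill_section_slack("sample.exe")    # hypothetical input
    perturbed = append_overlay(perturbed, 1024)
    with open("sample_adv.exe", "wb") as f:         # hypothetical output
        f.write(perturbed)
```

For well-formed PE files these regions are unused by the loader at run time, which is why such edits can skip the per-sample sandbox verification the abstract describes as standard but inefficient.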