{"title":"Towards A Framework for Preprocessing Analysis of Adversarial Windows Malware","authors":"N. D. Schultz, Adam Duby","doi":"10.1109/ISDFS55398.2022.9800812","DOIUrl":null,"url":null,"abstract":"Machine learning for malware detection and classification has shown promising results. However, motivated adversaries can thwart such classifiers by perturbing the classifier’s input features. Feature perturbation can be realized by transforming the malware, inducing an adversarial drift in the problem space. Realizable adversarial malware is constrained by available software transformations that preserve the malware’s original semantics yet perturb its features enough to cross a classifier’s decision boundary. Further, transformations should be plausible and robust to preprocessing. If a defender can identify and filter the adversarial noise, then the utility of the adversarial approach is decreased. In this paper, we examine common adversarial techniques against a set of constraints that expose each technique’s realizability. Our observations indicate that most adversarial perturbations can be reduced through forensic preprocessing of the malware, highlighting the advantage of forensic analysis prior to classification.","PeriodicalId":114335,"journal":{"name":"2022 10th International Symposium on Digital Forensics and Security (ISDFS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 10th International Symposium on Digital Forensics and Security (ISDFS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISDFS55398.2022.9800812","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Machine learning for malware detection and classification has shown promising results. However, motivated adversaries can thwart such classifiers by perturbing the classifier’s input features. Feature perturbation can be realized by transforming the malware, inducing an adversarial drift in the problem space. Realizable adversarial malware is constrained by available software transformations that preserve the malware’s original semantics yet perturb its features enough to cross a classifier’s decision boundary. Further, transformations should be plausible and robust to preprocessing. If a defender can identify and filter the adversarial noise, then the utility of the adversarial approach is decreased. In this paper, we examine common adversarial techniques against a set of constraints that expose each technique’s realizability. Our observations indicate that most adversarial perturbations can be reduced through forensic preprocessing of the malware, highlighting the advantage of forensic analysis prior to classification.
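To make the idea of forensic preprocessing concrete, below is a minimal sketch (not taken from the paper) of one such step: stripping bytes appended after the last PE section, since appending benign or random overlay bytes is a common low-cost adversarial perturbation that preprocessing can undo before feature extraction. The use of the third-party `pefile` library and the function name `strip_overlay` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical preprocessing sketch: remove appended (overlay) bytes
# from a Windows PE file before extracting classifier features.
import pefile


def strip_overlay(raw: bytes) -> bytes:
    """Truncate a PE image at the end of its last section, discarding
    any bytes appended after it (a byte-append perturbation)."""
    pe = pefile.PE(data=raw, fast_load=True)
    end = max(
        (s.PointerToRawData + s.SizeOfRawData for s in pe.sections),
        default=len(raw),
    )
    # Malformed headers may overstate section sizes; never read past
    # the original buffer.
    return raw[: min(end, len(raw))]


# Usage (illustrative): extract features from the stripped bytes so
# that appended adversarial noise no longer influences the classifier.
# clean_bytes = strip_overlay(open("sample.exe", "rb").read())
```

This is only one of the transformations the paper's constraint analysis covers; the broader point is that any perturbation a defender can cheaply reverse during preprocessing loses much of its adversarial value.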