{"title":"A Metric for Machine Learning Vulnerability to Adversarial Examples","authors":"M. Bradley, Shengjie Xu","doi":"10.1109/INFOCOMWKSHPS51825.2021.9484430","DOIUrl":null,"url":null,"abstract":"Recent studies in the field of Adversarial Machine Learning (AML) have primarily focused on techniques for poisoning and manipulating the Machine Learning (ML) systems for operations such as malware identification and image recognition. While the offensive perspective of such systems is increasingly well documented, the work approaching the problem from the defensive standpoint is sparse. In this paper, we define a metric for quantizing the vulnerability or susceptibility of a given ML model to adversarial manipulation using only properties inherent to the model under examination. This metric will be shown to have several useful properties related to known features of classifier-based ML systems and is intended as a tool to broadly compare the security of various competing ML models based on their maximum potential susceptibility to adversarial manipulation.","PeriodicalId":109588,"journal":{"name":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOMWKSHPS51825.2021.9484430","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Recent studies in the field of Adversarial Machine Learning (AML) have primarily focused on techniques for poisoning and manipulating Machine Learning (ML) systems used for tasks such as malware identification and image recognition. While the offensive perspective on such systems is increasingly well documented, work approaching the problem from a defensive standpoint remains sparse. In this paper, we define a metric for quantifying the vulnerability, or susceptibility, of a given ML model to adversarial manipulation using only properties inherent to the model under examination. This metric is shown to have several useful properties related to known features of classifier-based ML systems and is intended as a tool for broadly comparing the security of competing ML models based on their maximum potential susceptibility to adversarial manipulation.
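The abstract does not specify how the proposed metric is computed, so the sketch below is only an illustration of the general idea of quantifying susceptibility from properties of the model itself, not the paper's metric. It uses the minimal L2 perturbation needed to cross the decision boundary of a linear classifier as a crude per-sample robustness measure; the function names, the inverse-margin aggregation, and the linear-model assumption are all hypothetical choices made for this example.

```python
import numpy as np

def minimal_flip_distance(w, b, x):
    """Smallest L2 perturbation that moves x across the decision
    boundary of the linear classifier sign(w.x + b)."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def susceptibility_score(w, b, X):
    """Average inverse margin over a sample set: larger values mean
    smaller perturbations suffice to change the model's decisions."""
    distances = np.array([minimal_flip_distance(w, b, x) for x in X])
    return float(np.mean(1.0 / (distances + 1e-12)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=5), 0.1          # toy linear model
    X = rng.normal(size=(100, 5))           # toy evaluation samples
    print(f"susceptibility: {susceptibility_score(w, b, X):.3f}")
```

For non-linear models the closed-form margin is unavailable, so a comparable score would have to estimate minimal perturbations empirically (e.g., via gradient-based attacks); the paper's actual construction may differ substantially from this sketch.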