{"title":"Measuring Machine Learning Robustness in front of Static and Dynamic Adversaries*","authors":"Héctor D. Menéndez","doi":"10.1109/ICTAI56018.2022.00033","DOIUrl":null,"url":null,"abstract":"Adversarial machine learning brought a new way of understanding the reliability of different learning systems. Knowing that the learning confidence depends significantly on small changes, such as noise, created a mind change in the artificial intelligence community, who started to consider the boundaries and limitations of machine learning methods. However, if we can measure these limitations, we can improve the strength of our machine learning models and their robustness. Following this motivation, this work introduces different measures of robustness for machine learning models based on false negatives. These measures can be evaluated for either static or dynamic scenarios, where an adversary performs intelligent actions to evade the system. To evaluate the metrics I have applied 11 classifiers to different benchmark datasets and created an adversary that performs an evolutionary search process aiming to reduce the classification accuracy. The results show that the most robust models are related to K-Nearest Neighbours, Logistic regression, and neural networks, although none of the systems is robust enough when the target is to reach a single misclassification.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Adversarial machine learning has brought a new way of understanding the reliability of learning systems. The realization that a model's confidence can hinge on small input changes, such as noise, shifted thinking in the artificial intelligence community, which began to examine the boundaries and limitations of machine learning methods. However, if we can measure these limitations, we can strengthen our machine learning models and improve their robustness. Following this motivation, this work introduces several robustness measures for machine learning models based on false negatives. These measures can be evaluated in either static or dynamic scenarios, where an adversary performs intelligent actions to evade the system. To evaluate the metrics, I apply 11 classifiers to different benchmark datasets and create an adversary that performs an evolutionary search aiming to reduce classification accuracy. The results show that the most robust models are K-Nearest Neighbours, logistic regression, and neural networks, although none of the systems remains robust when the adversary's goal is to induce even a single misclassification.
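The abstract does not spell out the paper's exact metrics or search operators, but the setup it describes can be illustrated with a minimal sketch: a static measure (the false-negative rate on clean test data) next to a dynamic one (how often an evolutionary search finds an evading perturbation). Everything below is an assumption for illustration only, not the paper's implementation: a simple mutation-and-select search, scikit-learn's `LogisticRegression` as a stand-in for one of the 11 classifiers, and a standard benchmark dataset.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a stand-in classifier on a benchmark dataset (hypothetical choices,
# not the paper's experimental setup).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

def false_negative_rate(model, X, y, positive=1):
    """Static measure: fraction of positive samples the model misses."""
    pos = y == positive
    return float(np.mean(model.predict(X[pos]) != positive))

def evolutionary_evasion(model, x, label, pop=30, gens=50, sigma=0.05, seed=None):
    """Dynamic adversary sketch: mutation-based search for a perturbation of x
    that flips the prediction. Fitness is the model's confidence in the true
    label, which the adversary minimizes."""
    rng = np.random.default_rng(seed)
    scale = sigma * (x.max() - x.min() + 1e-12)
    population = x + rng.normal(0.0, scale, size=(pop, x.size))
    for _ in range(gens):
        # Lower confidence in the true class is better for the adversary.
        fitness = model.predict_proba(population)[:, label]
        survivors = population[np.argsort(fitness)[: pop // 2]]
        if model.predict(survivors[:1])[0] != label:
            return survivors[0]            # evasion found
        # Mutate the survivors to refill the population.
        children = survivors + rng.normal(0.0, scale, size=survivors.shape)
        population = np.vstack([survivors, children])
    return None                            # adversary failed within its budget

# Static robustness: false-negative rate on clean test data.
print("static FNR:", false_negative_rate(clf, X_te, y_te))

# Dynamic robustness: evasion rate of the evolutionary adversary on positives.
positives = X_te[y_te == 1][:20]
evasions = sum(evolutionary_evasion(clf, x, 1, seed=i) is not None
               for i, x in enumerate(positives))
print("evasion rate under adversary:", evasions / len(positives))
```

The contrast between the two printed numbers mirrors the paper's static/dynamic distinction: a model can show a low false-negative rate on fixed data yet still yield to a search process that only needs one misclassification to succeed.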