{"title":"基于最短路径算法的随机森林分类器攻击","authors":"Tianjian Wang, Fuyong Zhang","doi":"10.1109/CACML55074.2022.00039","DOIUrl":null,"url":null,"abstract":"Though learning-based models have shown high performance on different tasks, existing efforts have discovered the vulnerability of classifiers to evasion attacks. Recent work has shown that individual models are more vulnerable than ensemble models in adversarial settings. However, we have empirically demonstrated that ordinary integration methods do not always improve the robustness against black-box attacks, which is more common in the physical world. In this paper, we prove that random forest does not effectively defend against adversarial attacks, even if it is highly discrete. The proposed non-gradient based algorithm can be fast implemented and receives binary feature inputs. We experimentally compared the robustness of random forests and SVMs using white-box and black-box assessments respectively, and show that random forests and decision tree are consistently worse than SVMs.","PeriodicalId":137505,"journal":{"name":"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Attacking Random Forest Classifiers based on Shortest Path Algorithm\",\"authors\":\"Tianjian Wang, Fuyong Zhang\",\"doi\":\"10.1109/CACML55074.2022.00039\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Though learning-based models have shown high performance on different tasks, existing efforts have discovered the vulnerability of classifiers to evasion attacks. Recent work has shown that individual models are more vulnerable than ensemble models in adversarial settings. However, we have empirically demonstrated that ordinary integration methods do not always improve the robustness against black-box attacks, which is more common in the physical world. In this paper, we prove that random forest does not effectively defend against adversarial attacks, even if it is highly discrete. The proposed non-gradient based algorithm can be fast implemented and receives binary feature inputs. 
We experimentally compared the robustness of random forests and SVMs using white-box and black-box assessments respectively, and show that random forests and decision tree are consistently worse than SVMs.\",\"PeriodicalId\":137505,\"journal\":{\"name\":\"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CACML55074.2022.00039\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CACML55074.2022.00039","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Attacking Random Forest Classifiers based on Shortest Path Algorithm
Although learning-based models achieve high performance across many tasks, prior work has shown that classifiers are vulnerable to evasion attacks. Recent work suggests that individual models are more vulnerable than ensemble models in adversarial settings. However, we empirically demonstrate that ordinary ensemble methods do not always improve robustness against black-box attacks, which are more common in the physical world. In this paper, we show that random forests do not effectively defend against adversarial attacks, even though they are highly discrete. The proposed non-gradient-based algorithm can be implemented efficiently and operates on binary feature inputs. We experimentally compare the robustness of random forests and SVMs under white-box and black-box assessments respectively, and show that random forests and decision trees are consistently less robust than SVMs.
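The abstract does not spell out the paper's shortest-path formulation, so the sketch below is a rough illustration only of the general idea it names: a non-gradient, minimal-perturbation evasion attack on a random forest over binary features. Enumerating bit flips in order of increasing count is a breadth-first search on the Hamming cube, i.e. a shortest-path search from the input to the nearest misclassified point. The toy data, model settings, and the `evade` helper are hypothetical stand-ins, not the authors' algorithm.

```python
# Illustrative sketch only: shortest-path-style (BFS over bit flips) evasion
# of a random forest on binary features. Not the paper's actual method.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy binary-feature dataset with a simple XOR-style label rule.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 12))
y = (X[:, 0] ^ X[:, 3] ^ X[:, 7]).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)


def evade(model, x, max_flips=4):
    """Return the closest adversarial example found by flipping bits of x.

    Trying all flip sets of size 1, then 2, ... is a breadth-first search
    on the Hamming cube, so the first hit is a shortest path (fewest flips)
    to a point the model labels differently. No gradients are needed.
    """
    orig = model.predict(x.reshape(1, -1))[0]
    for k in range(1, max_flips + 1):
        for idxs in combinations(range(len(x)), k):
            x_adv = x.copy()
            x_adv[list(idxs)] ^= 1  # flip the chosen binary features
            if model.predict(x_adv.reshape(1, -1))[0] != orig:
                return x_adv, idxs
    return None, None


x0 = X[0]
x_adv, flipped = evade(clf, x0)
if x_adv is not None:
    print(f"evasion found by flipping features {flipped}")
```

Exhaustive enumeration is exponential in the flip budget; the paper's algorithm presumably exploits the tree structure to search far more efficiently, but the budget-ordered search above conveys why highly discrete models remain attackable without gradient information.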