Attacking Random Forest Classifiers based on Shortest Path Algorithm

Tianjian Wang, Fuyong Zhang
{"title":"基于最短路径算法的随机森林分类器攻击","authors":"Tianjian Wang, Fuyong Zhang","doi":"10.1109/CACML55074.2022.00039","DOIUrl":null,"url":null,"abstract":"Though learning-based models have shown high performance on different tasks, existing efforts have discovered the vulnerability of classifiers to evasion attacks. Recent work has shown that individual models are more vulnerable than ensemble models in adversarial settings. However, we have empirically demonstrated that ordinary integration methods do not always improve the robustness against black-box attacks, which is more common in the physical world. In this paper, we prove that random forest does not effectively defend against adversarial attacks, even if it is highly discrete. The proposed non-gradient based algorithm can be fast implemented and receives binary feature inputs. We experimentally compared the robustness of random forests and SVMs using white-box and black-box assessments respectively, and show that random forests and decision tree are consistently worse than SVMs.","PeriodicalId":137505,"journal":{"name":"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Attacking Random Forest Classifiers based on Shortest Path Algorithm\",\"authors\":\"Tianjian Wang, Fuyong Zhang\",\"doi\":\"10.1109/CACML55074.2022.00039\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Though learning-based models have shown high performance on different tasks, existing efforts have discovered the vulnerability of classifiers to evasion attacks. Recent work has shown that individual models are more vulnerable than ensemble models in adversarial settings. However, we have empirically demonstrated that ordinary integration methods do not always improve the robustness against black-box attacks, which is more common in the physical world. In this paper, we prove that random forest does not effectively defend against adversarial attacks, even if it is highly discrete. The proposed non-gradient based algorithm can be fast implemented and receives binary feature inputs. 
We experimentally compared the robustness of random forests and SVMs using white-box and black-box assessments respectively, and show that random forests and decision tree are consistently worse than SVMs.\",\"PeriodicalId\":137505,\"journal\":{\"name\":\"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CACML55074.2022.00039\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CACML55074.2022.00039","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Though learning-based models have shown high performance on many tasks, prior work has demonstrated that classifiers are vulnerable to evasion attacks. Recent work has shown that individual models are more vulnerable than ensemble models in adversarial settings. However, we empirically demonstrate that ordinary ensemble methods do not always improve robustness against black-box attacks, which are more common in the physical world. In this paper, we show that random forests do not effectively defend against adversarial attacks, even though their decision process is highly discrete. The proposed non-gradient-based algorithm can be implemented efficiently and operates on binary feature inputs. We experimentally compare the robustness of random forests and SVMs under white-box and black-box assessments, respectively, and show that random forests and decision trees are consistently less robust than SVMs.
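The abstract does not spell out the attack itself, but the shortest-path framing has a natural reading for a single decision tree over binary features: every root-to-leaf path fixes a 0/1 value for each feature it tests, so the cheapest evasion is the target-class leaf whose path constraints disagree with the input on the fewest bits. The sketch below illustrates only this general idea and is not the authors' implementation; it assumes scikit-learn's DecisionTreeClassifier, 0/1-valued features split at the usual 0.5 threshold, and min_flip_attack is a hypothetical helper name introduced here.

# Hypothetical sketch of a minimal-flip evasion attack on one decision tree
# with binary features. Not the paper's algorithm; it only illustrates why a
# highly discrete model admits small-Hamming-distance adversarial examples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def min_flip_attack(tree, x, target):
    """Return the input closest to x in Hamming distance that the tree
    classifies as `target`, or None if no leaf predicts `target`."""
    t = tree.tree_
    best_cost, best_x = np.inf, None
    # Depth-first walk over all root-to-leaf paths. `forced` maps a feature
    # index to the 0/1 value the current path requires it to take.
    stack = [(0, {})]
    while stack:
        node, forced = stack.pop()
        if t.children_left[node] == -1:  # leaf node
            if np.argmax(t.value[node]) != target:
                continue
            # Cost = number of forced bits that disagree with x.
            cost = sum(1 for f, v in forced.items() if x[f] != v)
            if cost < best_cost:
                adv = x.copy()
                for f, v in forced.items():
                    adv[f] = v
                best_cost, best_x = cost, adv
        else:
            f = t.feature[node]
            # With binary features and a 0.5 threshold, the left branch
            # corresponds to feature value 0 and the right branch to 1.
            stack.append((t.children_left[node], {**forced, f: 0}))
            stack.append((t.children_right[node], {**forced, f: 1}))
    return best_x

# Toy usage: train on random binary data and evade one prediction.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10)).astype(float)
y = (X[:, 0] + X[:, 3] + X[:, 7] >= 2).astype(int)
clf = DecisionTreeClassifier(max_depth=5).fit(X, y)

x = X[0]
adv = min_flip_attack(clf, x, target=1 - clf.predict([x])[0])
if adv is not None:
    print("flips:", int(np.sum(x != adv)),
          "| prediction:", clf.predict([x])[0], "->", clf.predict([adv])[0])

Extending this to a whole random forest is the harder part, since an adversarial example must simultaneously steer enough trees toward the target class; one would need to search over combinations of leaves, one per tree, which is presumably where the paper's shortest-path machinery comes in. The single-tree case above already makes the abstract's claim concrete: high discreteness alone does not prevent evasion with a handful of bit flips and no gradient information.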