Poisoning Attacks against Feature-Based Image Classification

Robin Mayerhofer, Rudolf Mayer
{"title":"基于特征的图像分类中毒攻击","authors":"Robin Mayerhofer, Rudolf Mayer","doi":"10.1145/3508398.3519363","DOIUrl":null,"url":null,"abstract":"Adversarial machine learning and the robustness of machine learning is gaining attention, especially in image classification. Attacks based on data poisoning, with the aim to lower the integrity or availability of a model, showed high success rates, while barely reducing the classifiers accuracy - particularly against Deep Learning approaches such as Convolutional Neural Networks (CNNs). While Deep Learning has become the most prominent technique for many pattern recognition tasks, feature-extraction based systems still have their applications - and there is surprisingly little research dedicated to the vulnerability of those approaches. We address this gap and show preliminary results in evaluating poisoning attacks against feature-extraction based systems, and compare them to CNNs, on a traffic sign classification dataset. Our findings show that feature-extraction based ML systems require higher poisoning percentages to achieve similar backdoor success, and also need a consistent (static) backdoor position to work.","PeriodicalId":102306,"journal":{"name":"Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Poisoning Attacks against Feature-Based Image Classification\",\"authors\":\"Robin Mayerhofer, Rudolf Mayer\",\"doi\":\"10.1145/3508398.3519363\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial machine learning and the robustness of machine learning is gaining attention, especially in image classification. Attacks based on data poisoning, with the aim to lower the integrity or availability of a model, showed high success rates, while barely reducing the classifiers accuracy - particularly against Deep Learning approaches such as Convolutional Neural Networks (CNNs). While Deep Learning has become the most prominent technique for many pattern recognition tasks, feature-extraction based systems still have their applications - and there is surprisingly little research dedicated to the vulnerability of those approaches. We address this gap and show preliminary results in evaluating poisoning attacks against feature-extraction based systems, and compare them to CNNs, on a traffic sign classification dataset. 
Our findings show that feature-extraction based ML systems require higher poisoning percentages to achieve similar backdoor success, and also need a consistent (static) backdoor position to work.\",\"PeriodicalId\":102306,\"journal\":{\"name\":\"Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3508398.3519363\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3508398.3519363","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Adversarial machine learning and the robustness of machine learning models are gaining attention, especially in image classification. Attacks based on data poisoning, which aim to lower the integrity or availability of a model, have shown high success rates while barely reducing the classifier's accuracy, particularly against Deep Learning approaches such as Convolutional Neural Networks (CNNs). While Deep Learning has become the most prominent technique for many pattern recognition tasks, feature-extraction based systems still have their applications, and there is surprisingly little research dedicated to the vulnerability of those approaches. We address this gap and present preliminary results from evaluating poisoning attacks against feature-extraction based systems, comparing them to CNNs on a traffic sign classification dataset. Our findings show that feature-extraction based ML systems require higher poisoning percentages to achieve similar backdoor success, and also need a consistent (static) backdoor position to work.
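The paper itself does not include an implementation, but the attack class it evaluates is a standard backdoor poisoning scheme: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. The following is a minimal sketch of such an attack under stated assumptions; the function name, the white-square trigger, the default poison rate, and the fixed patch position are illustrative choices, not the authors' exact setup.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.10,
                   trigger_size=4, position=(0, 0), seed=0):
    """Backdoor-poison a training set with a static trigger.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    A white square of side `trigger_size` is stamped at the fixed
    `position` on a random `poison_rate` fraction of the images,
    and those images are relabeled to `target_class`.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    y0, x0 = position
    # Static trigger: the same pixel region is overwritten in every
    # poisoned image, modeling the consistent backdoor position that
    # the abstract reports feature-based systems require.
    images[idx, y0:y0 + trigger_size, x0:x0 + trigger_size, :] = 1.0
    labels[idx] = target_class
    return images, labels
```

In terms of the paper's findings, raising poison_rate corresponds to the higher poisoning percentages that feature-extraction based systems reportedly need, and keeping position fixed across all poisoned samples corresponds to the static trigger placement they require, whereas CNN backdoors are reported to succeed under less restrictive conditions.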